This dataset contains the data used in the paper (arXiv:2301.02398) on the estimation and subtraction of glitches in gravitational wave data using an adaptive spline fitting method called SHAPES. Each .zip file corresponds to one of the glitches considered in the paper. The name of the class to which the glitch belongs (e.g., "Blip") is included in the name of the corresponding .zip file (e.g., BLIP_SHAPESRun_20221229T125928.zip). When uncompressed, each .zip file expands to a folder containing the following:
- An HDF5 file containing the whitened gravitational wave (GW) strain data in which the glitch appeared. The data has been whitened using a proprietary code. The original (unwhitened) strain data file is available from gwosc.org; its name is the part preceding the token '__dtrndWhtnBndpss' in the name of the HDF5 file.
- A JSON file containing information pertinent to the glitch that was analyzed (e.g., start and stop indices in the whitened data time series).
- A set of .mat files containing segmented estimates of the glitch as described in the paper.
A MATLAB script, plotglitch.m, has been provided that plots, for a given glitch folder name, the data segment that was analyzed in the paper. Another script, plotshapesestimate.m, plots the estimated glitch. These scripts require the JSONLab package.
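As a rough illustration, the sketch below shows how one uncompressed glitch folder might be inspected from Python instead of MATLAB. The folder name is taken from the example above, but the file extensions and HDF5 dataset keys are assumptions; the authoritative layout is defined by the JSON file and the paper's scripts.

```python
# Hedged sketch: inspect one uncompressed glitch folder (names and extensions are placeholders).
import glob
import json

import h5py
from scipy.io import loadmat

folder = "BLIP_SHAPESRun_20221229T125928"  # example folder name from the description above

# Whitened strain data: print every group/dataset name stored in the HDF5 file.
hdf5_path = glob.glob(f"{folder}/*.hdf5")[0]  # extension assumed; adjust if the file uses .h5
with h5py.File(hdf5_path, "r") as f:
    f.visit(print)

# Glitch metadata (e.g., start and stop indices of the analyzed segment).
json_path = glob.glob(f"{folder}/*.json")[0]
with open(json_path) as f:
    glitch_info = json.load(f)
print(glitch_info)

# Segmented SHAPES estimates of the glitch stored as .mat files.
for mat_path in glob.glob(f"{folder}/*.mat"):
    est = loadmat(mat_path)  # use mat73/h5py instead if these are MATLAB v7.3 files
    print(mat_path, [k for k in est if not k.startswith("__")])
```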
Time series of swashzone currents during DuneX
This archive contains field data used to investigate swash velocities by Britt Raubenheimer, Steve Elgar, and Alexandra Muscalus. The quality-controlled cross-shore velocity time series are in the .zip folder "u", and the quality-controlled alongshore velocity time series are in the .zip folder "v". Details about the files and data are provided in README.txt. For further questions, please contact Britt Raubenheimer at braubenheimer@whoi.edu, Steve Elgar at elgar@whoi.edu, or Alexandra Muscalus at alexandra.muscalus@whoi.edu. Support was provided by the National Science Foundation, the US Coastal Research Program, and the WHOI Build the Base Program.
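For reference, unpacking the two archives and previewing their contents might look like the minimal sketch below; the archive names are assumptions based on the description above, and the actual file formats are documented in README.txt.

```python
# Hedged sketch: unpack the "u" and "v" archives and list the time-series files inside.
# The archive names are assumptions; consult README.txt for the actual layout and formats.
import zipfile

for name in ["u.zip", "v.zip"]:
    with zipfile.ZipFile(name) as zf:
        zf.extractall(name.replace(".zip", ""))      # extract into ./u and ./v
        print(name, "->", zf.namelist()[:5], "...")  # preview the first few member names
```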
- PAR ID:
- 10593133
- Publisher / Repository:
- Zenodo
- Date Published:
- Format(s):
- Medium: X
- Right(s):
- Creative Commons Attribution 4.0 International
- Sponsoring Org:
- National Science Foundation
More Like this
-
This archive contains 2-min mean remotely sensed surface currents generated using optical imagery and PIV (Dooley et al., 2024) from field experiments (2013, 2018, 2021, 2022) at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The README.txt file contains information regarding the variables found in the MATLAB file (PVLAB_PIV_SurfaceFlows.mat), including estimate locations, units, and timestamps. Estimates were generated at times with appropriate conditions for remote sensing (e.g., sufficient foam tracer) that were within 1 day of measured bathymetry at the field site. Additional data for the field site (obtained by the USACE Field Research Facility or by NOAA), including the measured bathymetry and wave conditions, can be found at https://chlthredds.erdc.dren.mil/thredds/catalog/frf/catalog.html. Remote sensing estimates are not perfect, and errors in filtering out bad data are possible. Please reach out to Ciara Dooley at cdooley@whoi.edu, Steve Elgar at selgar@mac.com, or Britt Raubenheimer at braubenheimer@whoi.edu if you have any questions. Additionally, the optical imagery (a large dataset, many TBs) used to generate the flow estimates can be accessed by contacting the authors.
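As a rough sketch, the MATLAB file could be inspected from Python as shown below; nothing is assumed here beyond the file name, and the authoritative variable names, units, and shapes are listed in README.txt.

```python
# Hedged sketch: list the variables stored in PVLAB_PIV_SurfaceFlows.mat.
from scipy.io import loadmat

flows = loadmat("PVLAB_PIV_SurfaceFlows.mat")  # use mat73 instead if this is a MATLAB v7.3 file
for name, value in flows.items():
    if not name.startswith("__"):  # skip the MATLAB header entries added by loadmat
        print(name, getattr(value, "shape", type(value)))
```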
-
This folder contains the original data, data processing code, and demo code for the paper entitled "Quantum receiver enhanced by adaptive learning," published in Light: Science & Applications, DOI: 10.1038/s41377-022-01039-5. Please contact chaohancui@arizona.edu if you have questions or other concerns.
-
{"Abstract":["# DeepCaImX## Introduction#### Two-photon calcium imaging provides large-scale recordings of neuronal activities at cellular resolution. A robust, automated and high-speed pipeline to simultaneously segment the spatial footprints of neurons and extract their temporal activity traces while decontaminating them from background, noise and overlapping neurons is highly desirable to analyze calcium imaging data. In this paper, we demonstrate DeepCaImX, an end-to-end deep learning method based on an iterative shrinkage-thresholding algorithm and a long-short-term-memory neural network to achieve the above goals altogether at a very high speed and without any manually tuned hyper-parameters. DeepCaImX is a multi-task, multi-class and multi-label segmentation method composed of a compressed-sensing-inspired neural network with a recurrent layer and fully connected layers. It represents the first neural network that can simultaneously generate accurate neuronal footprints and extract clean neuronal activity traces from calcium imaging data. We trained the neural network with simulated datasets and benchmarked it against existing state-of-the-art methods with in vivo experimental data. DeepCaImX outperforms existing methods in the quality of segmentation and temporal trace extraction as well as processing speed. DeepCaImX is highly scalable and will benefit the analysis of mesoscale calcium imaging. \n\n## System and Environment Requirements#### 1. Both CPU and GPU are supported to run the code of DeepCaImX. A CUDA compatible GPU is preferred. * In our demo of full-version, we use a GPU of Quadro RTX8000 48GB to accelerate the training speed.* In our demo of mini-version, at least 6 GB momory of GPU/CPU is required.#### 2. Python 3.9 and Tensorflow 2.10.0#### 3. Virtual environment: Anaconda Navigator 2.2.0#### 4. Matlab 2023a\n\n## Demo and installation#### 1 (_Optional_) GPU environment setup. We need a Nvidia parallel computing platform and programming model called _CUDA Toolkit_ and a GPU-accelerated library of primitives for deep neural networks called _CUDA Deep Neural Network library (cuDNN)_ to build up a GPU supported environment for training and testing our model. The link of CUDA installation guide is https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html and the link of cuDNN installation guide is https://docs.nvidia.com/deeplearning/cudnn/installation/overview.html. #### 2 Install Anaconda. Link of installation guide: https://docs.anaconda.com/free/anaconda/install/index.html#### 3 Launch Anaconda prompt and install Python 3.x and Tensorflow 2.9.0 as the virtual environment.#### 4 Open the virtual environment, and then pip install mat73, opencv-python, python-time and scipy.#### 5 Download the "DeepCaImX_training_demo.ipynb" in folder "Demo (full-version)" for a full version and the simulated dataset via the google drive link. Then, create and put the training dataset in the path "./Training Dataset/". If there is a limitation on your computing resource or a quick test on our code, we highly recommand download the demo from the folder "Mini-version", which only requires around 6.3 GB momory in training. #### 6 Run: Use Anaconda to launch the virtual environment and open "DeepCaImX_training_demo.ipynb" or "DeepCaImX_testing_demo.ipynb". 
Then, please check and follow the guide of "DeepCaImX_training_demo.ipynb" or or "DeepCaImX_testing_demo.ipynb" for training or testing respectively.#### Note: Every package can be installed in a few minutes.\n\n## Run DeepCaImX#### 1. Mini-version demo* Download all the documents in the folder of "Demo (mini-version)".* Adding training and testing dataset in the sub-folder of "Training Dataset" and "Testing Dataset" separately.* (Optional) Put pretrained model in the the sub-folder of "Pretrained Model"* Using Anaconda Navigator to launch the virtual environment and opening "DeepCaImX_training_demo.ipynb" for training or "DeepCaImX_testing_demo.ipynb" for predicting.\n\n#### 2. Full-version demo* Download all the documents in the folder of "Demo (full-version)".* Adding training and testing dataset in the sub-folder of "Training Dataset" and "Testing Dataset" separately.* (Optional) Put pretrained model in the the sub-folder of "Pretrained Model"* Using Anaconda Navigator to launch the virtual environment and opening "DeepCaImX_training_demo.ipynb" for training or "DeepCaImX_testing_demo.ipynb" for predicting.\n\n## Data Tailor#### A data tailor developed by Matlab is provided to support a basic data tiling processing. In the folder of "Data Tailor", we can find a "tailor.m" script and an example "test.tiff". After running "tailor.m" by matlab, user is able to choose a "tiff" file from a GUI as loading the sample to be tiled. Settings include size of FOV, overlapping area, normalization option, name of output file and output data format. The output files can be found at local folder, which is at the same folder as the "tailor.m".\n\n## Simulated Dataset#### 1. Dataset generator (FISSA Version): The algorithm for generating simulated dataset is based on the paper of FISSA (_Keemink, S.W., Lowe, S.C., Pakan, J.M.P. et al. FISSA: A neuropil decontamination toolbox for calcium imaging signals. Sci Rep 8, 3493 (2018)_) and SimCalc repository (https://github.com/rochefort-lab/SimCalc/). For the code used to generate the simulated data, please download the documents in the folder "Simulated Dataset Generator". #### Training dataset: https://drive.google.com/file/d/1WZkIE_WA7Qw133t2KtqTESDmxMwsEkjJ/view?usp=share_link#### Testing Dataset: https://drive.google.com/file/d/1zsLH8OQ4kTV7LaqQfbPDuMDuWBcHGWcO/view?usp=share_link\n\n#### 2. Dataset generator (NAOMi Version): The algorithm for generating simulated dataset is based on the paper of NAOMi (_Song, A., Gauthier, J. L., Pillow, J. W., Tank, D. W. & Charles, A. S. Neural anatomy and optical microscopy (NAOMi) simulation for evaluating calcium imaging methods. Journal of neuroscience methods 358, 109173 (2021)_). For the code use to generate the simulated data, please go to this link: https://bitbucket.org/adamshch/naomi_sim/src/master/code/## Experimental Dataset#### We used the samples from ABO dataset:https://github.com/AllenInstitute/AllenSDK/wiki/Use-the-Allen-Brain-Observatory-%E2%80%93-Visual-Coding-on-AWS.#### The segmentation ground truth can be found in the folder "Manually Labelled ROIs". #### The segmentation ground truth of depth 175, 275, 375, 550 and 625 um are manually labeled by us. #### The code for creating ground truth of extracted traces can be found in "Prepro_Exp_Sample.ipynb" in the folder "Preprocessing of Experimental Sample"."]}more » « less
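Since the demo notebooks expect .mat training data and the installation step above includes the mat73 package, a quick sanity check of a downloaded sample might look like the hedged sketch below; the file name is a placeholder, and the variable layout is defined by the simulated dataset generator rather than by this sketch.

```python
# Hedged sketch: peek at one simulated training sample before running the demo notebook.
import mat73

sample = mat73.loadmat("Training Dataset/sample_0001.mat")  # hypothetical file name
for name, value in sample.items():
    # Print each stored variable and its shape (or its type, if it has no shape attribute).
    print(name, getattr(value, "shape", type(value)))
```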
-
This resource contains source code and select data products behind the following Master's thesis: Platt, L. (2024). Basins modulate signatures of river salinization (Master's thesis). University of Wisconsin-Madison, Freshwater and Marine Sciences. The source code represents an R-based data processing and modeling pipeline written using the R package "targets". Some of the folders in the source code zipfile are intentionally left empty (except for a hidden file ".placeholder") so that the code repository is set up with the required folder structure. To execute this code, download the zip folder, unzip it, and open the salt-modeling-data.Rproj file. Then reference the instructions in the README.md file for installing packages, building the pipeline, and examining the results. Newer versions of this repository may be updated on GitHub at github.com/lindsayplatt/salt-modeling-data. In addition to the source code, this resource contains three data files containing intermediate products of the pipeline. The first two represent data prepared for the random forest modeling. Data download and processing were completed in pipeline phases 1-5, and the random forest modeling was completed in phase 6 (see source code).
- site_attributes.csv contains the USGS gage site numbers and their associated basin attributes.
- site_classifications.csv contains the classification of each site for both episodic signatures ("Episodic" or "Not episodic") and baseflow salinization signatures ("positive", "none", "negative", or NA). Note that an NA in the baseflow classification column means that the site did not meet minimum data requirements for calculating a trend and was not used in the random forest model for baseflow salinization.
- site_attribute_details.csv contains a table of each attribute shorthand used as a column name in site_attributes.csv, along with its name, units, description, and data source.
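For a quick look at the intermediate CSV products outside the R pipeline, a hedged pandas sketch is shown below; the join key "site_no" is an assumption, and the real column names are documented in site_attribute_details.csv.

```python
# Hedged sketch: join the basin attributes with the site classifications.
import pandas as pd

attributes = pd.read_csv("site_attributes.csv", dtype=str)
classifications = pd.read_csv("site_classifications.csv", dtype=str)

# "site_no" is a hypothetical key column; check site_attribute_details.csv for the actual names.
merged = attributes.merge(classifications, on="site_no", how="inner")
print(merged.head())
print(merged.columns.tolist())
```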
