{"Abstract":["# DeepCaImX## Introduction#### Two-photon calcium imaging provides large-scale recordings of neuronal activities at cellular resolution. A robust, automated and high-speed pipeline to simultaneously segment the spatial footprints of neurons and extract their temporal activity traces while decontaminating them from background, noise and overlapping neurons is highly desirable to analyze calcium imaging data. In this paper, we demonstrate DeepCaImX, an end-to-end deep learning method based on an iterative shrinkage-thresholding algorithm and a long-short-term-memory neural network to achieve the above goals altogether at a very high speed and without any manually tuned hyper-parameters. DeepCaImX is a multi-task, multi-class and multi-label segmentation method composed of a compressed-sensing-inspired neural network with a recurrent layer and fully connected layers. It represents the first neural network that can simultaneously generate accurate neuronal footprints and extract clean neuronal activity traces from calcium imaging data. We trained the neural network with simulated datasets and benchmarked it against existing state-of-the-art methods with in vivo experimental data. DeepCaImX outperforms existing methods in the quality of segmentation and temporal trace extraction as well as processing speed. DeepCaImX is highly scalable and will benefit the analysis of mesoscale calcium imaging. \n\n## System and Environment Requirements#### 1. Both CPU and GPU are supported to run the code of DeepCaImX. A CUDA compatible GPU is preferred. * In our demo of full-version, we use a GPU of Quadro RTX8000 48GB to accelerate the training speed.* In our demo of mini-version, at least 6 GB momory of GPU/CPU is required.#### 2. Python 3.9 and Tensorflow 2.10.0#### 3. Virtual environment: Anaconda Navigator 2.2.0#### 4. Matlab 2023a\n\n## Demo and installation#### 1 (_Optional_) GPU environment setup. We need a Nvidia parallel computing platform and programming model called _CUDA Toolkit_ and a GPU-accelerated library of primitives for deep neural networks called _CUDA Deep Neural Network library (cuDNN)_ to build up a GPU supported environment for training and testing our model. The link of CUDA installation guide is https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html and the link of cuDNN installation guide is https://docs.nvidia.com/deeplearning/cudnn/installation/overview.html. #### 2 Install Anaconda. Link of installation guide: https://docs.anaconda.com/free/anaconda/install/index.html#### 3 Launch Anaconda prompt and install Python 3.x and Tensorflow 2.9.0 as the virtual environment.#### 4 Open the virtual environment, and then pip install mat73, opencv-python, python-time and scipy.#### 5 Download the "DeepCaImX_training_demo.ipynb" in folder "Demo (full-version)" for a full version and the simulated dataset via the google drive link. Then, create and put the training dataset in the path "./Training Dataset/". If there is a limitation on your computing resource or a quick test on our code, we highly recommand download the demo from the folder "Mini-version", which only requires around 6.3 GB momory in training. #### 6 Run: Use Anaconda to launch the virtual environment and open "DeepCaImX_training_demo.ipynb" or "DeepCaImX_testing_demo.ipynb". 
Then, please check and follow the guide of "DeepCaImX_training_demo.ipynb" or or "DeepCaImX_testing_demo.ipynb" for training or testing respectively.#### Note: Every package can be installed in a few minutes.\n\n## Run DeepCaImX#### 1. Mini-version demo* Download all the documents in the folder of "Demo (mini-version)".* Adding training and testing dataset in the sub-folder of "Training Dataset" and "Testing Dataset" separately.* (Optional) Put pretrained model in the the sub-folder of "Pretrained Model"* Using Anaconda Navigator to launch the virtual environment and opening "DeepCaImX_training_demo.ipynb" for training or "DeepCaImX_testing_demo.ipynb" for predicting.\n\n#### 2. Full-version demo* Download all the documents in the folder of "Demo (full-version)".* Adding training and testing dataset in the sub-folder of "Training Dataset" and "Testing Dataset" separately.* (Optional) Put pretrained model in the the sub-folder of "Pretrained Model"* Using Anaconda Navigator to launch the virtual environment and opening "DeepCaImX_training_demo.ipynb" for training or "DeepCaImX_testing_demo.ipynb" for predicting.\n\n## Data Tailor#### A data tailor developed by Matlab is provided to support a basic data tiling processing. In the folder of "Data Tailor", we can find a "tailor.m" script and an example "test.tiff". After running "tailor.m" by matlab, user is able to choose a "tiff" file from a GUI as loading the sample to be tiled. Settings include size of FOV, overlapping area, normalization option, name of output file and output data format. The output files can be found at local folder, which is at the same folder as the "tailor.m".\n\n## Simulated Dataset#### 1. Dataset generator (FISSA Version): The algorithm for generating simulated dataset is based on the paper of FISSA (_Keemink, S.W., Lowe, S.C., Pakan, J.M.P. et al. FISSA: A neuropil decontamination toolbox for calcium imaging signals. Sci Rep 8, 3493 (2018)_) and SimCalc repository (https://github.com/rochefort-lab/SimCalc/). For the code used to generate the simulated data, please download the documents in the folder "Simulated Dataset Generator". #### Training dataset: https://drive.google.com/file/d/1WZkIE_WA7Qw133t2KtqTESDmxMwsEkjJ/view?usp=share_link#### Testing Dataset: https://drive.google.com/file/d/1zsLH8OQ4kTV7LaqQfbPDuMDuWBcHGWcO/view?usp=share_link\n\n#### 2. Dataset generator (NAOMi Version): The algorithm for generating simulated dataset is based on the paper of NAOMi (_Song, A., Gauthier, J. L., Pillow, J. W., Tank, D. W. & Charles, A. S. Neural anatomy and optical microscopy (NAOMi) simulation for evaluating calcium imaging methods. Journal of neuroscience methods 358, 109173 (2021)_). For the code use to generate the simulated data, please go to this link: https://bitbucket.org/adamshch/naomi_sim/src/master/code/## Experimental Dataset#### We used the samples from ABO dataset:https://github.com/AllenInstitute/AllenSDK/wiki/Use-the-Allen-Brain-Observatory-%E2%80%93-Visual-Coding-on-AWS.#### The segmentation ground truth can be found in the folder "Manually Labelled ROIs". #### The segmentation ground truth of depth 175, 275, 375, 550 and 625 um are manually labeled by us. #### The code for creating ground truth of extracted traces can be found in "Prepro_Exp_Sample.ipynb" in the folder "Preprocessing of Experimental Sample"."]}
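For orientation only, here is a minimal sketch of loading one of the simulated training files with the packages installed in step 4 (mat73 reads MATLAB v7.3 files). The file name and variable layout are assumptions, not part of the release, so adapt them to the dataset you actually downloaded:

```python
import mat73          # reader for MATLAB v7.3 .mat files (pip install mat73)
import numpy as np

# Hypothetical file name; point this at one of the .mat files placed in "./Training Dataset/".
data = mat73.loadmat("./Training Dataset/sample_0001.mat")

# Inspect which variables the file contains, then pull one out as a NumPy array.
print(list(data.keys()))
first_var = np.asarray(next(iter(data.values())))
print("shape:", first_var.shape, "dtype:", first_var.dtype)
```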
DeepInMiniscope: Deep-learning-powered physics-informed integrated miniscope
Mask-based integrated fluorescence microscopy is a compact imaging technique for biomedical research. It can perform snapshot 3D imaging through a thin optical mask with a scalable field of view (FOV) and a thin device thickness. Integrated microscopy uses computational algorithms for object reconstruction, but efficient reconstruction algorithms for large-scale data have been lacking. Here, we developed DeepInMiniscope, a miniaturized integrated microscope featuring a custom-designed optical mask and a multi-stage physics-informed deep learning model. This reduces the computational resource demands by orders of magnitude and facilitates fast reconstruction. Our deep learning algorithm can reconstruct object volumes over 4×6×0.6 mm³. We demonstrated substantial improvement in both reconstruction quality and speed compared to traditional methods for large-scale data. Notably, we imaged neuronal activity with near-cellular resolution in awake mouse cortex, representing a substantial leap over existing integrated microscopes. DeepInMiniscope holds great promise for scalable, large-FOV, high-speed, 3D imaging applications with compact device footprint.

# DeepInMiniscope: Deep-learning-powered physics-informed integrated miniscope

[https://doi.org/10.5061/dryad.6t1g1jx83](https://doi.org/10.5061/dryad.6t1g1jx83)

## Description of the data and file structure

### DeepInMiniscope: Learned Integrated Miniscope

### Datasets, models and codes for 2D and 3D sample reconstructions.

The dataset for 2D reconstruction includes test data for green stained lens tissue. Input: measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches. Output: the slide containing green lens tissue features.

The dataset for 3D sample reconstructions includes test data for 3D reconstruction of an in-vivo mouse brain video recording. Input: time-series standard deviation of the difference-to-local-mean weighted raw video. Output: reconstructed 4D volumetric video containing a 3-dimensional distribution of neural activities.

## Files and variables

### Download data, code, and sample results

1. Download data `data.zip`, code `code.zip`, results `results.zip`.
2. Unzip the downloaded files and place them in the same main folder.
3. Confirm that the main folder contains three subfolders: `data`, `code`, and `results`. Inside the `data` and `code` folders, there should be subfolders for each test case.

## Data

### 2D_lenstissue

**data_2d_lenstissue.mat:** Measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches.

* **Xt:** stacked 108 FOVs of the measured image, each centered at one microlens unit with 720 x 720 pixels. Data dimension in order of (batch, height, width, FOV).
* **Yt:** placeholder variable for the reconstructed object, each centered at the corresponding microlens unit, with 180 x 180 voxels. Data dimension in order of (batch, height, width, FOV).

**reconM_0308:** Trained multi-FOV ADMM-Net model for 2D lens tissue reconstruction.

**gen_lenstissue.mat:** Generated lens tissue reconstruction produced by running the model with the code **2D_lenstissue.py**.

* **generated_images:** stacked 108 reconstructed FOVs of the lens tissue sample by the multi-FOV ADMM-Net; the assembled full-sample reconstruction is shown in results/2D_lenstissue_reconstruction.png.

### 3D_mouse

**reconM_g704_z5_v4:** Trained 3D multi-FOV ADMM-Net model for 3D sample reconstructions.

**t_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290.mat:** Time-series standard deviation of the difference-to-local-mean weighted raw video.

* **Xts:** test video with 290 frames, each frame containing 6 FOVs with 1408 x 1408 pixels per FOV. Data dimension in order of (frames, height, width, FOV).

**gen_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290_v4.mat:** Generated 4D volumetric video containing the 3-dimensional distribution of neural activities.

* **generated_images_fu:** frame-by-frame 3D reconstruction of the recorded video in uint8 format. Data dimension in order of (batch, FOV, height, width, depth). Each frame contains 6 FOVs, and each FOV has 13 reconstruction depths with 416 x 416 voxels per depth.

Variables inside the saved model subfolders (reconM_0308 and reconM_g704_z5_v4):

* **saved_model.pb:** model computation graph including architecture and input/output definitions.
* **keras_metadata.pb:** Keras metadata for the saved model, including model class, training configuration, and custom objects.
* **assets:** external files for custom assets loaded during model training/inference. This folder is empty, as the model does not use custom assets.
* **variables.data-00000-of-00001:** numerical values of model weights and parameters.
* **variables.index:** index file that maps variable names to weight locations in .data.

## Code/software

### Set up the Python environment

1. Download and install the [Anaconda distribution](https://www.anaconda.com/download).
2. The code was tested with the following packages:
   * python=3.9.7
   * tensorflow=2.7.0
   * keras=2.7.0
   * matplotlib=3.4.3
   * scipy=1.7.1

## Code

**2D_lenstissue.py:** Python code for the multi-FOV ADMM-Net model to generate reconstruction results. The function of each script section is described at the beginning of each section.

**lenstissue_2D.m:** MATLAB code to display the generated image and reassemble sub-FOV patches.

**sup_psf.m:** MATLAB script to load the microlens coordinates data and to generate the PSF pattern.

**lenscoordinates.xls:** Microlens unit coordinates table.

**3D mouse.py:** Python code for the multi-FOV ADMM-Net model to generate reconstruction results. The function of each script section is described at the beginning of each section.

**mouse_3D.m:** MATLAB code to display the reconstructed neural activity video and to calculate temporal correlation.

A minimal loading sketch is given after this section.

## Access information

Other publicly accessible locations of the data:

* [https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope](https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope)
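As a rough orientation (not part of the original release), the following minimal Python sketch loads the 2D lens-tissue test data and runs the trained model. The relative paths follow the data/code folder layout described above but are assumptions, and scipy.io.loadmat assumes the .mat file is not in v7.3 format (otherwise use an HDF5 reader such as h5py):

```python
import scipy.io as sio
import tensorflow as tf

# Load the measured sub-FOV patches (Xt) described above; the path assumes the unzipped folder layout.
mat = sio.loadmat("data/2D_lenstissue/data_2d_lenstissue.mat")
Xt = mat["Xt"]
print("Xt shape (batch, height, width, FOV):", Xt.shape)

# Load the trained multi-FOV ADMM-Net (a TensorFlow SavedModel directory) and reconstruct.
model = tf.keras.models.load_model("code/2D_lenstissue/reconM_0308", compile=False)
generated_images = model.predict(Xt)
print("reconstruction shape:", generated_images.shape)
```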
- Award ID(s): 1847141
- PAR ID: 10644633
- Publisher / Repository: Dryad
- Date Published:
- Edition / Version: 7
- Subject(s) / Keyword(s): FOS: Electrical engineering, electronic engineering, information engineering; Lensless miniscope; Calcium imaging; ADMM; Fluorescence microscopy; Miniaturized microscope; Physics-informed; Integrated microscope; Deep learning
- Format(s): Medium: X
- Size(s): 4269376866 bytes
- Sponsoring Org: National Science Foundation
More Like this
Head-mounted miniaturized two-photon microscopes are powerful tools to record neural activity with cellular resolution deep in the mouse brain during unrestrained, free-moving behavior. Two-photon microscopy, however, is traditionally limited in imaging frame rate due to the necessity of raster scanning the laser excitation spot over a large field-of-view (FOV). Here, we present two multiplexed miniature two-photon microscopes (M-MINI2Ps) to increase the imaging frame rate while preserving the spatial resolution. Two different FOVs are imaged simultaneously and then demixed temporally or computationally. We demonstrate large-scale (500×500 µm² FOV) multiplane calcium imaging in visual cortex and prefrontal cortex in freely moving mice during spontaneous exploration, social behavior, and auditory stimulus. Furthermore, the increased speed of M-MINI2Ps also enables two-photon voltage imaging at 400 Hz over a 380×150 µm² FOV in freely moving mice. M-MINI2Ps have compact footprints and are compatible with the open-source MINI2P. M-MINI2Ps, together with their design principles, allow the capture of faster physiological dynamics and population recordings over a greater volume than currently possible in freely moving mice, and will be a powerful tool in systems neuroscience.

# Data for: Multiplexed miniaturized two-photon microscopy (M-MINI2Ps)

Dataset DOI: [10.5061/dryad.kd51c5bkp](https://doi.org/10.5061/dryad.kd51c5bkp)

## Description of the data and file structure

Calcium and voltage imaging datasets from Multiplexed Miniaturized Two-Photon Microscopy (M-MINI2P).

### Files and variables

#### File: TM_MINI2P_Voltage_Cranial_VisualCortex.zip

**Description:** Voltage imaging dataset acquired in mouse primary visual cortex (V1) using the TM-MINI2P system through a cranial window preparation. This .zip file contains two TIF files, corresponding to the top field of view (FOV) and the bottom field of view (FOV) of the demultiplexed recordings.

#### File: TM_MINI2P_Calcium_GRIN_PFC_Auditory_Free_vs_Headfix.zip

**Description:** Volumetric calcium imaging dataset from mouse prefrontal cortex (PFC) using the TM-MINI2P system with a GRIN lens implant, comparing neural responses during sound stimulation versus quiet periods, under both freely moving and head-fixed conditions. This .zip file contains 12 TIF files: top and bottom fields of view (FOVs) of the multiplexed recordings at three imaging depths (100 μm, 155 μm, and 240 μm from the end of the implanted GRIN lens), with six files from freely moving conditions and six files from head-fixed conditions.

#### File: CM_MINI2P_Calcium_Cranial_VisualCortex_SocialBehavior.zip

**Description:** Calcium imaging dataset from mouse primary visual cortex (V1) using the CM-MINI2P system through a cranial window, recorded during social interaction and isolated conditions. This .zip file contains 6 TIF files: multiplexed recordings from the top and bottom fields of view (FOVs), and single-FOV recordings at two imaging depths (170 µm and 250 µm).

#### File: TM_MINI2P_Calcium_Cranial_VisualCortex.zip

**Description:** Multi-depth calcium imaging dataset from mouse primary visual cortex (V1) using the TM-MINI2P system through a cranial window during spontaneous exploration. This .zip file contains 6 TIF files: demultiplexed recordings from two fields of view (FOV1 and FOV2) at three imaging depths (110 µm, 170 µm, and 230 µm).

## Code/software

All datasets are in .tiff format and ImageJ can be used for visualization.
Calcium imaging data and voltage imaging data were analyzed using CaImAn and VolPy, respectively; both are open-source packages available at [https://github.com/flatironinstitute/CaImAn](https://github.com/flatironinstitute/CaImAn).
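If you prefer Python over ImageJ for a quick look at the data, the following minimal sketch (assuming the tifffile package; the file name inside the archive is hypothetical) loads one of the TIFF stacks and computes a mean-intensity trace as a sanity check before running CaImAn or VolPy:

```python
import numpy as np
import tifffile

# Hypothetical file name: substitute any TIFF extracted from one of the .zip archives above.
stack = tifffile.imread("TM_MINI2P_Calcium_Cranial_VisualCortex/FOV1_110um.tif")
print("frames, height, width:", stack.shape)

# Mean fluorescence per frame across the full FOV.
trace = stack.reshape(stack.shape[0], -1).mean(axis=1)
print("first five values of the mean-intensity trace:", np.round(trace[:5], 2))
```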
Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence-level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the different predictive models in nonidentical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits both local and unified predictive models in parallel, reconciling previous disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.

## MEG Data

MEG data are in FIFF format and can be opened with MNE-Python. The data have been directly converted from the acquisition device's native format without any preprocessing. Events contained in the data indicate the stimuli in numerical order. Subjects R2650 and R2652 heard stimulus 11b instead of 11.

## Predictor Variables

The original audio files are copyrighted and cannot be shared, but the make_audio folder contains make_clips.py, which can be used to extract the exact clips from the commercially available audiobook (ISBN 978-1480555280). The predictors directory contains all the predictors used in the original study as pickled eelbrain objects. They can be loaded in Python with the eelbrain.load.unpickle function. The TextGrids directory contains the TextGrids aligned to the audio files.

## Source Localization

The localization.zip file contains files needed for source localization. Structural brain models used in the published analysis are reconstructed by scaling the FreeSurfer fsaverage brain (distributed with FreeSurfer) based on each subject's `MRI scaling parameters.cfg` file. This can be done using the `mne.scale_mri` function. Each subject's MEG folder contains a `subject-trans.fif` file with the coregistration between MEG sensor space and (scaled) MRI space, which is used to compute the forward solution.
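As a minimal illustration (the file names are hypothetical; adjust them to the actual folder contents), the raw recordings and predictors described above could be loaded with MNE-Python and eelbrain as follows:

```python
import mne
import eelbrain

# One subject's raw MEG recording in FIFF format (hypothetical path).
raw = mne.io.read_raw_fif("R2650/R2650_raw.fif", preload=False)
print(raw.info)  # the events stored in the data index the stimuli in numerical order

# One pickled predictor from the predictors directory (hypothetical file name).
predictor = eelbrain.load.unpickle("predictors/word_onsets.pickle")
print(predictor)
```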
Mask-based integrated fluorescence microscopy is a compact imaging technique for biomedical research. It can perform snapshot 3D imaging through a thin optical mask with a scalable field of view (FOV). Integrated microscopy uses computational algorithms for object reconstruction, but efficient reconstruction algorithms for large-scale data have been lacking. Here, we developed DeepInMiniscope, a miniaturized integrated microscope featuring a custom-designed optical mask and an efficient physics-informed deep learning model that markedly reduces computational demand. Parts of the 3D object can be individually reconstructed and combined. Our deep learning algorithm can reconstruct object volumes over 4 millimeters by 6 millimeters by 0.6 millimeters. We demonstrated substantial improvement in both reconstruction quality and speed compared to traditional methods for large-scale data. Notably, we imaged neuronal activity with near-cellular resolution in awake mouse cortex, representing a substantial leap over existing integrated microscopes. DeepInMiniscope holds great promise for scalable, large-FOV, high-speed, 3D imaging applications with compact device footprint.
{"Abstract":["This data set contains all classifications that the Gravity Spy Machine Learning model for LIGO glitches from the first three observing runs (O1, O2 and O3, where O3 is split into O3a and O3b). Gravity Spy classified all noise events identified by the Omicron trigger pipeline in which Omicron identified that the signal-to-noise ratio was above 7.5 and the peak frequency of the noise event was between 10 Hz and 2048 Hz. To classify noise events, Gravity Spy made Omega scans of every glitch consisting of 4 different durations, which helps capture the morphology of noise events that are both short and long in duration.<\/p>\n\nThere are 22 classes used for O1 and O2 data (including No_Glitch and None_of_the_Above), while there are two additional classes used to classify O3 data.<\/p>\n\nFor O1 and O2, the glitch classes were: 1080Lines, 1400Ripples, Air_Compressor, Blip, Chirp, Extremely_Loud, Helix, Koi_Fish, Light_Modulation, Low_Frequency_Burst, Low_Frequency_Lines, No_Glitch, None_of_the_Above, Paired_Doves, Power_Line, Repeating_Blips, Scattered_Light, Scratchy, Tomte, Violin_Mode, Wandering_Line, Whistle<\/p>\n\nFor O3, the glitch classes were: 1080Lines, 1400Ripples, Air_Compressor, Blip, Blip_Low_Frequency<\/strong>, Chirp, Extremely_Loud, Fast_Scattering<\/strong>, Helix, Koi_Fish, Light_Modulation, Low_Frequency_Burst, Low_Frequency_Lines, No_Glitch, None_of_the_Above, Paired_Doves, Power_Line, Repeating_Blips, Scattered_Light, Scratchy, Tomte, Violin_Mode, Wandering_Line, Whistle<\/p>\n\nIf you would like to download the Omega scans associated with each glitch, then you can use the gravitational-wave data-analysis tool GWpy. If you would like to use this tool, please install anaconda if you have not already and create a virtual environment using the following command<\/p>\n\n```conda create --name gravityspy-py38 -c conda-forge python=3.8 gwpy pandas psycopg2 sqlalchemy```<\/p>\n\nAfter downloading one of the CSV files for a specific era and interferometer, please run the following Python script if you would like to download the data associated with the metadata in the CSV file. We recommend not trying to download too many images at one time. For example, the script below will read data on Hanford glitches from O2 that were classified by Gravity Spy and filter for only glitches that were labelled as Blips with 90% confidence or higher, and then download the first 4 rows of the filtered table.<\/p>\n\n```<\/p>\n\nfrom gwpy.table import GravitySpyTable<\/p>\n\nH1_O2 = GravitySpyTable.read('H1_O2.csv')<\/p>\n\nH1_O2[(H1_O2["ml_label"] == "Blip") & (H1_O2["ml_confidence"] > 0.9)]<\/p>\n\nH1_O2[0:4].download(nproc=1)<\/p>\n\n```<\/p>\n\nEach of the columns in the CSV files are taken from various different inputs: <\/p>\n\n[\u2018event_time\u2019, \u2018ifo\u2019, \u2018peak_time\u2019, \u2018peak_time_ns\u2019, \u2018start_time\u2019, \u2018start_time_ns\u2019, \u2018duration\u2019, \u2018peak_frequency\u2019, \u2018central_freq\u2019, \u2018bandwidth\u2019, \u2018channel\u2019, \u2018amplitude\u2019, \u2018snr\u2019, \u2018q_value\u2019] contain metadata about the signal from the Omicron pipeline. <\/p>\n\n[\u2018gravityspy_id\u2019] is the unique identifier for each glitch in the dataset. 
<\/p>\n\n[\u20181400Ripples\u2019, \u20181080Lines\u2019, \u2018Air_Compressor\u2019, \u2018Blip\u2019, \u2018Chirp\u2019, \u2018Extremely_Loud\u2019, \u2018Helix\u2019, \u2018Koi_Fish\u2019, \u2018Light_Modulation\u2019, \u2018Low_Frequency_Burst\u2019, \u2018Low_Frequency_Lines\u2019, \u2018No_Glitch\u2019, \u2018None_of_the_Above\u2019, \u2018Paired_Doves\u2019, \u2018Power_Line\u2019, \u2018Repeating_Blips\u2019, \u2018Scattered_Light\u2019, \u2018Scratchy\u2019, \u2018Tomte\u2019, \u2018Violin_Mode\u2019, \u2018Wandering_Line\u2019, \u2018Whistle\u2019] contain the machine learning confidence for a glitch being in a particular Gravity Spy class (the confidence in all these columns should sum to unity). <\/p>\n\n[\u2018ml_label\u2019, \u2018ml_confidence\u2019] provide the machine-learning predicted label for each glitch, and the machine learning confidence in its classification. <\/p>\n\n[\u2018url1\u2019, \u2018url2\u2019, \u2018url3\u2019, \u2018url4\u2019] are the links to the publicly-available Omega scans for each glitch. \u2018url1\u2019 shows the glitch for a duration of 0.5 seconds, \u2018url2\u2019 for 1 seconds, \u2018url3\u2019 for 2 seconds, and \u2018url4\u2019 for 4 seconds.<\/p>\n\n```<\/p>\n\nFor the most recently uploaded training set used in Gravity Spy machine learning algorithms, please see Gravity Spy Training Set on Zenodo.<\/p>\n\nFor detailed information on the training set used for the original Gravity Spy machine learning paper, please see Machine learning for Gravity Spy: Glitch classification and dataset on Zenodo. <\/p>"]}more » « less
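Beyond the GWpy example above, a small pandas sketch (not from the original documentation) can verify the column conventions described here, e.g. that the per-class confidences sum to unity and that ml_confidence equals the largest per-class value. It assumes a CSV named following the same era/interferometer pattern as in the example (H1_O2.csv) and the O1/O2 class set:

```python
import pandas as pd

df = pd.read_csv("H1_O2.csv")

# The 22 per-class confidence columns listed above (O1/O2 class set).
class_columns = [
    "1400Ripples", "1080Lines", "Air_Compressor", "Blip", "Chirp", "Extremely_Loud",
    "Helix", "Koi_Fish", "Light_Modulation", "Low_Frequency_Burst", "Low_Frequency_Lines",
    "No_Glitch", "None_of_the_Above", "Paired_Doves", "Power_Line", "Repeating_Blips",
    "Scattered_Light", "Scratchy", "Tomte", "Violin_Mode", "Wandering_Line", "Whistle",
]

# Per-glitch confidences should sum to (approximately) 1.
print(df[class_columns].sum(axis=1).describe())

# ml_confidence should match the maximum per-class confidence for each glitch.
print((df["ml_confidence"] - df[class_columns].max(axis=1)).abs().max())
```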
