Abstract

Background: Determining cell identity in volumetric images of tagged neuronal nuclei is an ongoing challenge in contemporary neuroscience. Frequently, cell identity is determined by aligning and matching tags to an “atlas” of labeled neuronal positions and other identifying characteristics. Previous analyses of such C. elegans datasets have been hampered by the limited accuracy of these atlases, especially for neurons in the ventral nerve cord, and by time-consuming manual steps in the alignment process.

Results: We present a novel automated alignment method for the sparse and incomplete point clouds typical of C. elegans fluorescence microscopy datasets. The method involves a tunable learning parameter and a kernel that enforces biologically realistic deformation. We also present a pipeline for creating alignment atlases from datasets of the recently developed NeuroPAL transgene. In combination, these advances allow us to label neurons in volumetric images with much higher confidence than previous methods.

Conclusions: We release, to the best of our knowledge, the most complete full-body C. elegans 3D positional neuron atlas, incorporating positional variability derived from at least 7 animals per neuron, for the purposes of cell-type identity prediction in myriad applications (e.g., imaging neuronal activity, gene expression, and cell fate).
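Such an atlas pairs each neuron with a mean position and a measure of positional variability, which is what makes probabilistic identity prediction possible. As a minimal sketch of atlas-based labeling (not the alignment method described above), the following assumes the detected nuclei have already been registered into the atlas coordinate frame; `atlas_means`, `atlas_covs`, and `names` are hypothetical per-neuron arrays derived from such an atlas:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_identities(points, atlas_means, atlas_covs, names):
    """Assign each detected nucleus to an atlas neuron by minimizing the
    total squared Mahalanobis distance (one-to-one Hungarian matching)."""
    n, m = len(points), len(atlas_means)
    cost = np.zeros((n, m))
    for j in range(m):
        inv_cov = np.linalg.inv(atlas_covs[j])       # per-neuron positional variability
        d = points - atlas_means[j]                  # (n, 3) offsets from atlas mean
        cost[:, j] = np.einsum('ni,ij,nj->n', d, inv_cov, d)
    rows, cols = linear_sum_assignment(cost)         # globally optimal matching
    return {r: names[c] for r, c in zip(rows, cols)}
```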
Automated cell annotation in multi-cell images using an improved CRF_ID algorithm
Cell identification is an important yet difficult step in the analysis of biological images. Previously, we developed an automated cell identification method called CRF_ID and demonstrated its high performance on C. elegans whole-brain images (Chaudhary et al., 2021). However, because the method was optimized for whole-brain imaging, comparable performance could not be guaranteed for the commonly used C. elegans multi-cell images that display only a subpopulation of cells. Here, we present CRF_ID 2.0, an advance that extends the generalizability of the method to multi-cell imaging beyond whole-brain imaging. To illustrate this advance, we characterize CRF_ID 2.0 in multi-cell imaging and in cell-specific gene expression analysis in C. elegans. This work demonstrates that highly accurate automated cell annotation in multi-cell imaging can expedite cell identification and reduce its subjectivity in C. elegans and potentially in other biological images of various origins.
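CRF_ID frames annotation as inference on a conditional random field: unary potentials score how well a label's atlas features match each cell, and pairwise potentials reward assignments that preserve relative positional relationships between cells (Chaudhary et al., 2021). The sketch below is a simplified illustration of that idea under assumed inputs, not the published implementation; it uses anterior-posterior ordering as the only pairwise cue and refines labels by iterated conditional modes:

```python
import numpy as np

def annotate_cells(unary, cell_x, atlas_x, n_iters=50):
    """Greedy CRF inference (iterated conditional modes).

    unary   : (n_cells, n_labels) log-affinity of each cell for each label
    cell_x  : (n_cells,) anterior-posterior coordinate of each detected cell
    atlas_x : (n_labels,) anterior-posterior coordinate of each atlas label
    Note: the full method also enforces one-to-one label assignments,
    which this simplified sketch omits.
    """
    n_cells, n_labels = unary.shape
    labels = unary.argmax(axis=1)                  # initialize from unary terms alone
    for _ in range(n_iters):
        changed = False
        for i in range(n_cells):
            best, best_score = labels[i], -np.inf
            for l in range(n_labels):
                score = unary[i, l]
                for j in range(n_cells):           # pairwise: preserve AP ordering
                    if j != i:
                        same_order = ((cell_x[i] < cell_x[j]) ==
                                      (atlas_x[l] < atlas_x[labels[j]]))
                        score += 1.0 if same_order else -1.0
                if score > best_score:
                    best, best_score = l, score
            if best != labels[i]:
                labels[i], changed = best, True
        if not changed:                            # converged: no label moved
            break
    return labels
```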
- Award ID(s): 1764406
- PAR ID: 10540191
- Publisher / Repository: eLife
- Date Published:
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- This paper presents a method for time-lapse 3D cell analysis. Specifically, we consider the problem of accurately localizing and quantitatively analyzing sub-cellular features, and of tracking individual cells from time-lapse 3D confocal cell image stacks. The heterogeneity of cells and the volume of multi-dimensional images present a major challenge for fully automated analysis of cell morphogenesis and development. This paper is motivated by the pavement cell growth process and by building a quantitative morphogenesis model. We propose a deep-feature-based segmentation method to accurately detect and label each cell region. An adjacency-graph-based method is used to extract sub-cellular features of the segmented cells. Finally, a robust graph-based tracking algorithm using multiple cell features is proposed for associating cells at different time instances. We also demonstrate the generality of our tracking method on C. elegans fluorescent nuclei imagery. Extensive experimental results are provided and demonstrate the robustness of the proposed method. The code is available on and the method is available as a service through the BisQue portal.
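Of the three components, the adjacency-graph step is the most self-contained: neighboring cell regions in the labeled segmentation become graph edges, from which wall-contact features can be read off. A minimal sketch of that construction on a hypothetical labeled array (not the authors' released code):

```python
import numpy as np

def region_adjacency(labels):
    """Build a cell adjacency graph from a labeled segmentation.

    labels : integer array (2D or 3D) where each cell region has a unique id
             and background is 0
    Returns {(id_a, id_b): n_touching_voxel_pairs} for id_a < id_b, i.e. graph
    edges weighted by the size of the shared wall between neighboring cells.
    """
    edges = {}
    for axis in range(labels.ndim):
        a = np.moveaxis(labels, axis, 0)[:-1].ravel()   # each voxel
        b = np.moveaxis(labels, axis, 0)[1:].ravel()    # its neighbor along `axis`
        touching = (a != b) & (a > 0) & (b > 0)         # boundary between two cells
        for pair in zip(np.minimum(a[touching], b[touching]),
                        np.maximum(a[touching], b[touching])):
            edges[pair] = edges.get(pair, 0) + 1
    return edges
```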
- LED array microscopy is an emerging platform for computational imaging with significant utility for biological imaging. Existing LED array systems often exploit the transmission imaging geometry of standard brightfield microscopes, leaving the rich backscattered field undetected. This backscattered signal contains high-resolution sample information with superb sensitivity to subtle structural features, making it ideal for biological sensing and detection. Here, we develop an LED array reflectance microscope that captures the sample's backscattered signal. In particular, we demonstrate multimodal brightfield, darkfield, and differential phase contrast imaging on fixed and living biological specimens including Caenorhabditis elegans (C. elegans), zebrafish embryos, and live cell cultures. Video-rate multimodal imaging at 20 Hz records real-time features of freely moving C. elegans and the fast-beating heart of zebrafish embryos. Our new reflectance mode is a valuable addition to the LED array microscopy toolbox.
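Differential phase contrast from an LED array is conventionally computed from image pairs captured under complementary half-pupil illumination; the normalized difference is, to first order, proportional to the phase gradient along the split axis. A minimal sketch of this standard computation (not the reflectance-specific reconstruction in the paper):

```python
import numpy as np

def dpc_image(i_left, i_right, eps=1e-6):
    """Differential phase contrast from two complementary half-circle
    LED illuminations along one axis.

    The normalized difference cancels absorption contrast and leaves a
    signal proportional (to first order) to the phase gradient.
    """
    i_left = i_left.astype(float)
    i_right = i_right.astype(float)
    return (i_left - i_right) / (i_left + i_right + eps)  # eps avoids divide-by-zero
```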
- This study showcases the multifunctionality of a single-shot quantitative phase microscopy (QPM) system for comprehensive cell analysis. The system captures four high-contrast images in one shot, enabling tasks such as cell segmentation, measuring cell confluence, and estimating cell mass. We demonstrate the usability of the QPM system in routine biological workflows, showing how its integration with computational algorithms enables automated, precise analysis, achieving accuracy scores between 85% and 97% across samples with varying cell densities, even those with low signal-to-noise ratios. This cost-effective tool operates under low-intensity light and resists vibrations, making it highly versatile for researchers in both optical and biological fields.
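Estimating cell dry mass from a quantitative phase map typically uses the standard relation m = (λ / 2πα) ∫ φ dA, with specific refractive increment α ≈ 0.2 μm³/pg for cellular protein. The sketch below illustrates that textbook relation under assumed units; it is not the paper's specific algorithm:

```python
import numpy as np

def dry_mass_pg(phase, pixel_area_um2, wavelength_um=0.532, alpha_um3_per_pg=0.2):
    """Cell dry mass (pg) from a phase map (radians) over one segmented cell.

    Uses m = (wavelength / (2*pi*alpha)) * integral(phase dA), where
    alpha ~ 0.2 um^3/pg is the typical specific refractive increment.
    """
    # Convert phase to integrated optical path difference (um^3)
    opd_integral = phase.sum() * pixel_area_um2 * wavelength_um / (2 * np.pi)
    return opd_integral / alpha_um3_per_pg
```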
- Across basic research studies, cell counting requires significant human time and expertise. Trained experts use thin-focal-plane scanning to count (click) cells in stained biological tissue. This computer-assisted process (optical disector) requires a well-trained human to select a unique best z-plane of focus for counting cells of interest. Though accurate, this approach typically requires an hour per case and is prone to inter- and intra-rater errors. Our group has previously proposed deep learning (DL)-based methods to automate these counts using cell segmentation at high magnification. Here we propose a novel You Only Look Once (YOLO) model that performs cell detection on multi-channel z-plane images (disector stack). This automated Multiple Input Multiple Output (MIMO) version of the optical disector method uses an entire z-stack of microscopy images as its input and outputs cell detections (counts) with a bounding box for each cell and a class corresponding to the z-plane where the cell appears in best focus. Compared to previous segmentation methods, the proposed method does not require time- and labor-intensive ground-truth segmentation masks for training, while producing accuracy comparable to current segmentation-based automatic counts. The MIMO-YOLO method was evaluated on systematic-random samples of NeuN-stained tissue sections through the neocortex of mouse brains (n=7). Using a cross-validation scheme, this method showed the ability to correctly count total neuron numbers with accuracy close to human experts and with 100% repeatability (test-retest).
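The output format described above, one box per cell tagged with the z-plane of best focus, implies that duplicate detections of the same cell across neighboring planes must be merged. A minimal sketch of such cross-plane suppression on hypothetical pooled detections (not the authors' MIMO-YOLO code):

```python
import numpy as np

def merge_across_z(boxes, scores, iou_thresh=0.5):
    """Merge per-plane detections of the same cell pooled across a z-stack.

    boxes  : (n, 4) [x1, y1, x2, y2] boxes from every z-plane of the stack
    scores : (n,)   confidence, assumed to peak at the cell's best-focus plane
    Returns indices of kept detections; indexing a per-detection z array with
    them recovers the best-focus plane for each counted cell.
    """
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    order = np.argsort(scores)[::-1]          # highest confidence first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # overlap of the kept box with all remaining lower-scoring boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(x2 - x1, 0) * np.maximum(y2 - y1, 0)
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou < iou_thresh]        # suppress duplicates from other planes
    return keep
```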