Abstract: This paper presents a method for time-lapse 3D cell analysis. Specifically, we consider the problem of accurately localizing and quantitatively analyzing sub-cellular features, and of tracking individual cells from time-lapse 3D confocal cell image stacks. The heterogeneity of cells and the volume of multi-dimensional images present a major challenge for fully automated analysis of morphogenesis and development of cells. This paper is motivated by the pavement cell growth process and by the goal of building a quantitative morphogenesis model. We propose a deep-feature-based segmentation method to accurately detect and label each cell region. An adjacency-graph-based method is used to extract sub-cellular features of the segmented cells. Finally, a robust graph-based tracking algorithm using multiple cell features is proposed for associating cells at different time instances. We also demonstrate the generality of our tracking method on C. elegans fluorescent nuclei imagery. Extensive experimental results are provided and demonstrate the robustness of the proposed method. The code is available, and the method is available as a service through the BisQue portal.
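The tracking step described above associates segmented cells across time points using multiple cell features. The sketch below illustrates one standard way to set up such an association, Hungarian matching on per-cell feature vectors; the feature layout and cost threshold are assumptions for illustration, not the paper's exact graph-based algorithm.

```python
# Minimal sketch: associate segmented cells across two consecutive time points
# by matching per-cell feature vectors (e.g. centroid, volume) with the
# Hungarian algorithm. Illustrative simplification, not the paper's tracker.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_cells(feats_t, feats_t1, max_cost=50.0):
    """feats_t: (N, D) features at time t; feats_t1: (M, D) features at time t+1.
    Returns a list of (i, j) index pairs linking cells between the two frames."""
    # Pairwise Euclidean distances between cells at time t and t+1
    cost = np.linalg.norm(feats_t[:, None, :] - feats_t1[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # Reject implausible matches (cells that moved or changed too much)
    return [(int(i), int(j)) for i, j in zip(rows, cols) if cost[i, j] <= max_cost]

# Toy usage with assumed feature layout [x, y, z, volume]
cells_t  = np.array([[10.0, 12.0, 5.0, 200.0], [40.0, 8.0, 6.0, 180.0]])
cells_t1 = np.array([[41.0, 9.0, 6.0, 185.0], [11.0, 13.0, 5.0, 205.0]])
print(associate_cells(cells_t, cells_t1))  # [(0, 1), (1, 0)]
```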
Scale Selection and Machine Learning-based Cell Segmentation and Tracking in Time Lapse Microscopy
Monitoring and tracking of cell motion is a key component for understanding disease mechanisms and evaluating the effects of treatments. Time-lapse optical microscopy has been commonly employed for studying cell cycle phases. However, manual cell tracking is very time-consuming and has poor reproducibility. Automated cell tracking techniques are challenged by the variability of cell region intensity distributions and by resolution limitations. In this work, we introduce a comprehensive cell segmentation and tracking methodology. A key contribution of this work is that it employs multi-scale space-time interest point detection and characterization for automatic scale selection and cell segmentation. Another contribution is the use of a neural network with class prototype balancing for detection of cell regions. This work also offers a structured mathematical framework that uses graphs for track generation and cell event detection. We evaluated the cell segmentation, detection, and tracking performance of our method on time-lapse sequences of the Cell Tracking Challenge (CTC). We also compared our technique to the top-performing techniques from CTC. Performance evaluation results indicate that the proposed methodology is competitive with these techniques, and that it generalizes very well to diverse cell types and sizes and to multiple imaging techniques.
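To illustrate the idea behind automatic scale selection, the sketch below uses an off-the-shelf multi-scale Laplacian-of-Gaussian blob detector: each detected blob is reported with the Gaussian scale at which its response peaks, so detection adapts to cell size. This is only a conceptual illustration with assumed parameter values, not the space-time interest point detector used in the paper.

```python
# Minimal sketch of automatic scale selection via multi-scale blob detection.
# blob_log searches a range of Gaussian scales and keeps, for each blob, the
# scale at which the Laplacian-of-Gaussian response peaks, which is a standard
# form of scale selection (illustration only, not the paper's detector).
import numpy as np
from skimage.feature import blob_log
from skimage.draw import disk

# Synthetic frame with two bright "cells" of different sizes
frame = np.zeros((128, 128), dtype=float)
rr, cc = disk((40, 40), 6)
frame[rr, cc] = 1.0
rr, cc = disk((90, 85), 14)
frame[rr, cc] = 1.0

# Each returned row is (y, x, sigma); the selected sigma reflects the cell's size
blobs = blob_log(frame, min_sigma=2, max_sigma=20, num_sigma=15, threshold=0.05)
for y, x, sigma in blobs:
    print(f"cell at ({y:.0f}, {x:.0f}), selected scale sigma={sigma:.1f}, "
          f"estimated radius={sigma * np.sqrt(2):.1f}")
```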
- Award ID(s): 2234871
- PAR ID: 10617280
- Publisher / Repository: Research Square
- Date Published:
- Format(s): Medium: X
- Institution: Research Square
- Sponsoring Org: National Science Foundation
More Like this
- We propose a Hierarchical Convolution Neural Network (HCNN) for mitosis event detection in time-lapse phase-contrast microscopy. Our method contains two stages: first, we extract candidate spatial-temporal patch sequences from the input image sequences that potentially contain mitosis events. Then, we identify whether each patch sequence contains a mitosis event using a hierarchical convolutional neural network. In the experiments, we validate the design of our proposed architecture and evaluate the mitosis event detection performance. Our method achieves 99.1% precision and 97.2% recall on very challenging image sequences of multipolar-shaped C3H10T1/2 mesenchymal stem cells and outperforms other state-of-the-art methods. Furthermore, the proposed method does not depend on hand-crafted feature design or cell tracking. It can be straightforwardly adapted to event detection for other cell types.
- Among the non-invasive colorectal cancer (CRC) screening approaches, Computed Tomography Colonography (CTC) and Virtual Colonoscopy (VC) are much more accurate. This work proposes an AI-based polyp detection framework for virtual colonoscopy (VC). Two main steps are addressed in this work: automatic segmentation to isolate the colon region from its background, and automatic polyp detection. Moreover, we evaluate the performance of the proposed framework on low-dose Computed Tomography (CT) scans. We build on our visualization approach, Fly-In (FI), which provides “filet”-like projections of the internal surface of the colon. The performance of the Fly-In approach confirms its ability to help gastroenterologists, and it holds great promise for combating CRC. In this work, these 2D projections of FI are fused with the 3D colon representation to generate new synthetic images. The synthetic images are used to train a RetinaNet model to detect polyps. The trained model has a 94% F1-score and 97% sensitivity. Furthermore, we study the effect of dose variation in CT scans on the performance of the FI approach in polyp visualization. A simulation platform is developed for CTC visualization using FI, for regular CTC and low-dose CTC. This is accomplished using a novel AI restoration algorithm that enhances the low-dose CT images so that a 3D colon can be successfully reconstructed and visualized using the FI approach. Three senior board-certified radiologists evaluated the framework: at a peak voltage of 30 kV, the average relative sensitivity of the platform was 92%, whereas a peak voltage of 60 kV produced an average relative sensitivity of 99.5%.
- We propose a novel weakly supervised method to improve the boundary of 3D segmented nuclei by utilizing an oversegmented image. This is motivated by the observation that current state-of-the-art deep learning methods do not produce accurate boundaries when the training data is weakly annotated. Towards this, a 3D U-Net is trained to obtain the centroids of the nuclei and is integrated with a simple linear iterative clustering (SLIC) supervoxel algorithm that provides better adherence to cluster boundaries. To track these segmented nuclei, our algorithm utilizes relative nuclei locations, capturing the processes of nuclei division and apoptosis. The proposed algorithmic pipeline achieves better segmentation performance compared to the state-of-the-art method in Cell Tracking Challenge (CTC) 2019 and comparable performance to state-of-the-art methods in IEEE ISBI CTC 2020, while utilizing very little pixel-wise annotated data. Detailed experimental results are provided, and the source code is available on GitHub. (A minimal illustrative sketch of this supervoxel-based refinement idea appears after this list.)
- Microfluidic devices (MDs) present a novel method for detecting circulating tumor cells (CTCs), enhancing the process through targeted techniques and visual inspection. However, current approaches often yield heterogeneous CTC populations, necessitating additional processing for comprehensive analysis and phenotype identification. These procedures are often expensive and time-consuming, and they need to be performed by skilled technicians. In this study, we investigate the potential of a cost-effective and efficient hyperuniform micropost MD approach for CTC classification. Our approach combines mathematical modeling of fluid–structure interactions in a simulated microfluidic channel with machine learning techniques. Specifically, we developed a cell-based modeling framework to assess CTC dynamics in erythrocyte-laden plasma flow, generating a large dataset of CTC trajectories that account for two distinct CTC phenotypes. A convolutional neural network (CNN) and a recurrent neural network (RNN) were then employed to analyze the dataset and classify these phenotypes. The results demonstrate the potential effectiveness of the hyperuniform micropost MD design and analysis approach in distinguishing between different CTC phenotypes based on cell trajectory, offering a promising avenue for early cancer detection.
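Relating to the weakly supervised nuclei segmentation item above, the sketch below illustrates the general idea of refining a coarse 3D mask with SLIC supervoxels: a supervoxel is kept if most of its voxels fall inside the coarse mask, so the refined boundary follows supervoxel (intensity) edges. The function name, thresholds, and parameters are illustrative assumptions; the original pipeline additionally uses a 3D U-Net to localize nucleus centroids.

```python
# Minimal sketch: snap a coarse 3D nucleus mask to SLIC supervoxel boundaries.
# A supervoxel is retained when most of its voxels lie inside the coarse mask.
# Illustrative only; not the published pipeline.
import numpy as np
from skimage.segmentation import slic

def refine_with_supervoxels(volume, coarse_mask, n_segments=500, overlap=0.5):
    """volume: 3D intensity array; coarse_mask: 3D boolean array from a weak model."""
    # Oversegment the grayscale volume into 3D supervoxels
    sv = slic(volume, n_segments=n_segments, compactness=0.1,
              channel_axis=None, start_label=1)
    refined = np.zeros_like(coarse_mask)
    for label in np.unique(sv):
        region = sv == label
        inside = coarse_mask[region].mean()  # fraction of supervoxel inside the mask
        if inside >= overlap:
            refined[region] = True
    return refined
```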