- Award ID(s):
- 1926990
- PAR ID:
- 10478124
- Publisher / Repository:
- IEEE
- DOI:
- 10.1109/TNNLS.2022.3213407
- Date Published:
- Journal Name:
- IEEE Transactions on Neural Networks and Learning Systems
- ISSN:
- 2162-237X
- Page Range / eLocation ID:
- 1 to 20
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Across basic research studies, cell counting requires significant human time and expertise. Trained experts use thin focal plane scanning to count (click) cells in stained biological tissue. This computer-assisted process (optical disector) requires a well-trained human to select a unique best z-plane of focus for counting cells of interest. Though accurate, this approach typically requires an hour per case and is prone to inter- and intra-rater errors. Our group has previously proposed deep learning (DL)-based methods to automate these counts using cell segmentation at high magnification. Here we propose a novel You Only Look Once (YOLO) model that performs cell detection on multi-channel z-plane images (disector stack). This automated Multiple Input Multiple Output (MIMO) version of the optical disector method uses an entire z-stack of microscopy images as its input and outputs cell detections (counts), with a bounding box for each cell and a class corresponding to the z-plane where the cell appears in best focus. Compared to the previous segmentation methods, the proposed method does not require time- and labor-intensive ground truth segmentation masks for training, while producing accuracy comparable to current segmentation-based automatic counts. The MIMO-YOLO method was evaluated on systematic-random samples of NeuN-stained tissue sections through the neocortex of mouse brains (n=7). Using a cross-validation scheme, the method correctly counted total neuron numbers with accuracy close to that of human experts and with 100% repeatability (test-retest).
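As a rough illustration of the multi-channel input idea described above, the sketch below stacks the z-planes of a disector stack into the channel dimension of a single tensor. The tiny convolutional backbone, the stack depth K, and all names are illustrative stand-ins, not the authors' MIMO-YOLO implementation, in which detection heads would additionally predict each cell's bounding box and best-focus plane.

```python
import torch
import torch.nn as nn

K = 10                                       # z-planes per disector stack (assumed)
z_stack = torch.rand(K, 512, 512)            # one grayscale image per z-plane
x = z_stack.unsqueeze(0)                     # (1, K, H, W): planes become channels

# Stand-in backbone; a real YOLO variant would append detection heads that
# output a bounding box, an objectness score, and a class indexing the
# best-focus z-plane for each detected cell.
backbone = nn.Sequential(
    nn.Conv2d(K, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
features = backbone(x)                       # (1, 64, 128, 128)
```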
-
Stereology-based methods provide the current state-of-the-art approaches for accurate quantification of numbers and other morphometric parameters of biological objects in stained tissue sections. The advent of artificial intelligence (AI)-based deep learning (DL) offers the possibility of improving throughput by automating the collection of stereology data. We have recently shown that DL can achieve accuracy comparable to manual stereology but with higher repeatability, improved throughput, and less variation due to human factors, by quantifying the total number of immunostained cells at their maximal profile of focus in extended depth of field (EDF) images. In the first of two novel contributions in this work, we propose a semi-automatic approach that uses a handcrafted Adaptive Segmentation Algorithm (ASA) to automatically generate ground truth on EDF images for training our DL models to count cells with unbiased stereology methods. This approach increases the amount of training data, thereby improving the accuracy and efficiency of automatic cell counting, without requiring extra expert time. The second contribution of this work is a Multi-channel Input and Multi-channel Output (MIMO) method that uses a U-Net DL architecture for automatic cell counting in a stack of z-axis images (also known as a disector stack). This DL-based digital automation of the ordinary optical fractionator ensures accurate counts through spatial separation of stained cells in the z-plane, thereby avoiding false negatives from overlapping cells in EDF images without the shortcomings of 3D and recurrent DL models. This contribution overcomes under-counting errors with EDF images caused by overlapping cells in the z-plane (masking). We demonstrate the practical application of these advances with automatic disector-based estimates of the total number of NeuN-immunostained neurons in a mouse neocortex. In summary, this work provides the first demonstration of automatic estimation of a total cell number in tissue sections using a combination of deep learning and the disector-based optical fractionator method.
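To make the multi-channel input/output mapping concrete, here is a minimal sketch assuming a network with K input planes and K output probability maps, followed by one connected-component count per plane so each cell is counted once at its plane of best focus. The two-layer stand-in network and all parameters are illustrative, not the paper's U-Net.

```python
import torch
import torch.nn as nn
from scipy import ndimage

K = 10                                           # z-planes per stack (assumed)

# Stand-in for the MIMO U-Net: K input planes -> K per-plane output maps.
net = nn.Sequential(
    nn.Conv2d(K, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, K, 1),
)

stack = torch.rand(1, K, 256, 256)               # one disector stack
with torch.no_grad():
    prob_maps = torch.sigmoid(net(stack)).squeeze(0)   # (K, H, W)
binary = (prob_maps > 0.5).numpy()

# Count connected components per plane and sum over the stack, so cells
# spatially separated along the z-axis are never merged into one detection.
total = sum(ndimage.label(binary[k])[1] for k in range(K))
print("estimated cell count:", total)
```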
-
Chromatin-sensitive partial wave spectroscopic (csPWS) microscopy offers a non-invasive glimpse into the mass density distribution of cellular structures at the nanoscale by leveraging spectroscopic information. This capability allows us to analyze chromatin structure and organization and the global transcriptional state of cell nuclei to study their role in carcinogenesis. Accurate segmentation of the nuclei in csPWS microscopy images is an essential step in isolating them for further analysis. However, manual segmentation is error-prone, biased, time-consuming, and laborious, resulting in disrupted nuclear boundaries with partial or over-segmentation. Here, we present a deep-learning-driven approach to automate accurate nuclei segmentation of label-free (without any exogenous fluorescent staining) live-cell csPWS microscopy imaging data. Our approach, csPWS-seg, harnesses a convolutional neural network-based U-Net model with an attention mechanism to automate accurate cell nuclei segmentation of csPWS microscopy images. We leveraged the structural, physical, and biological differences between the cytoplasm, nucleus, and nuclear periphery to construct three distinct csPWS feature images for nucleus segmentation. Using these images of HCT116 cells, csPWS-seg achieved superior performance with a median intersection over union (IoU) of 0.80 and a Dice similarity coefficient (DSC) score of 0.89. csPWS-seg outperformed several other commonly used deep-learning-based segmentation models for biomedical imaging, such as U-Net, SE-U-Net, Mask R-CNN, and DeepLabV3+, marking a significant improvement in segmentation accuracy. Further, we analyzed the performance of our proposed model with four loss functions, binary cross-entropy loss, focal loss, Dice loss, and Jaccard loss, each used separately as well as in combination. csPWS-seg with focal loss, or with the combination of these loss functions, provided the best results. The automatic and accurate nuclei segmentation offered by csPWS-seg not only automates, accelerates, and streamlines csPWS data analysis but also enhances the reliability of subsequent chromatin analysis research, paving the way for more accurate diagnostics, treatment, and understanding of cellular mechanisms of carcinogenesis.
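The four loss functions compared above are standard and easy to write down. The sketch below gives one common formulation of each for binary segmentation logits; the equal weighting in the combination is an assumption, not the weighting used by csPWS-seg.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, gamma=2.0):
    # Down-weights easy pixels so training focuses on hard boundaries.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = torch.exp(-bce)                        # probability of the true class
    return ((1.0 - p_t) ** gamma * bce).mean()

def dice_loss(logits, target, eps=1e-6):
    p = torch.sigmoid(logits)
    inter = (p * target).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + target.sum() + eps)

def jaccard_loss(logits, target, eps=1e-6):
    p = torch.sigmoid(logits)
    inter = (p * target).sum()
    union = p.sum() + target.sum() - inter
    return 1.0 - (inter + eps) / (union + eps)

def combined_loss(logits, target):
    # Equal weights are an illustrative choice, not taken from the paper.
    return (F.binary_cross_entropy_with_logits(logits, target)
            + focal_loss(logits, target)
            + dice_loss(logits, target)
            + jaccard_loss(logits, target))
```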
-
Motivation: Morphological analyses with flatmount fluorescent images are essential to retinal pigment epithelial (RPE) aging studies and thus require accurate RPE cell segmentation. Although rapid advances in deep learning semantic segmentation have achieved great success in many areas of biomedical research, the performance of these supervised learning methods for RPE cell segmentation is still limited by inadequate training data with high-quality annotations.
Results: To address this problem, we develop a Self-Supervised Semantic Segmentation (S4) method that uses a self-supervised learning strategy to train a semantic segmentation network with an encoder–decoder architecture. We employ a reconstruction loss and a pairwise representation loss to make the encoder extract structural information, while a morphology loss produces the segmentation map. In addition, we develop a novel image augmentation algorithm (AugCut) that produces multiple views for self-supervised learning and enhances network training. To validate the efficacy of our method, we applied S4 to RPE cell segmentation in a large set of flatmount fluorescent microscopy images and compared it with other state-of-the-art deep learning approaches. Our method demonstrates better performance in both qualitative and quantitative evaluations, suggesting promising potential to support large-scale cell morphological analyses in RPE aging investigations. A minimal sketch of the two-view training objective appears after this abstract.
Availability and implementation: The code and documentation are available at https://github.com/jkonglab/S4_RPE.
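A minimal sketch of the two-view self-supervised objective, assuming a toy encoder/decoder and a generic noise augmentation in place of AugCut (whose definition is given in the paper); the morphology loss, which produces the segmentation map, is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoder-decoder; the real S4 network is an encoder-decoder
# segmentation architecture, not this two-layer stand-in.
encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 32, 3, padding=1))
decoder = nn.Conv2d(32, 1, 3, padding=1)

def make_view(img):
    # Placeholder augmentation; AugCut itself is defined by the authors.
    return img + 0.05 * torch.randn_like(img)

img = torch.rand(4, 1, 128, 128)                 # a batch of flatmount crops
z1, z2 = encoder(make_view(img)), encoder(make_view(img))

recon_loss = F.mse_loss(decoder(z1), img)        # reconstruction loss
pair_loss = 1.0 - F.cosine_similarity(           # pairwise representation loss
    z1.flatten(1), z2.flatten(1)).mean()
loss = recon_loss + pair_loss
```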
-
Time-lapse microscopy is essential for quantifying the dynamics of cells, subcellular organelles, and biomolecules. Biologists use different fluorescent tags to label and track subcellular structures and biomolecules within cells. However, not all of them are compatible with time-lapse imaging, and the labeling itself can perturb the cells in undesirable ways. We hypothesized that phase images contain the requisite information to identify and track nuclei within cells. By using traditional blob detection to generate binary mask labels from the stained channel images and training a detection and segmentation model with the deep learning Mask R-CNN architecture, we managed to segment nuclei based only on phase images. The detection average precision is 0.82 at an IoU threshold of 0.5, and the mean IoU between masks generated from phase images and ground truth masks from experts is 0.735. Achieved without any ground truth mask labels at training time, these results support our hypothesis and enable detection of nuclei without the need for exogenous labeling.
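The label-generation step lends itself to a short sketch: Laplacian-of-Gaussian blob detection on the stained channel yields approximate nucleus masks that can serve as training labels for the phase-image model. All parameters below are illustrative choices, and the IoU helper matches the metric reported above.

```python
import numpy as np
from skimage import feature, draw

def blobs_to_mask(stained, min_sigma=3, max_sigma=15, threshold=0.1):
    """Approximate binary nucleus mask from a stained-channel image."""
    blobs = feature.blob_log(stained, min_sigma=min_sigma,
                             max_sigma=max_sigma, threshold=threshold)
    mask = np.zeros(stained.shape, dtype=bool)
    for y, x, sigma in blobs:                    # blob radius ~ sqrt(2) * sigma
        rr, cc = draw.disk((y, x), radius=sigma * np.sqrt(2),
                           shape=stained.shape)
        mask[rr, cc] = True
    return mask

def iou(a, b):
    """Intersection over union of two boolean masks."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```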