Scanning electron microscopy (SEM) is widely used to image and study bacterial cells at high resolution. Segmenting bacteria in SEM images is an essential task for delineating each object of interest and its specific region. The resulting segmentations can then be used to retrieve quantitative measures (e.g., cell length, area, cell density) that support accurate, cell-level analysis and decision making. However, bacterial segmentation is challenging: the intensity and texture of the foreground and background are similar, and clustered bacterial cells in these images frequently overlap one another. Traditional approaches for identifying cell regions in microscopy images are labor-intensive and depend heavily on the expertise of researchers. To mitigate these challenges, in this study we tested a U-Net-based semantic segmentation architecture followed by a morphological post-processing step that resolves over-segmentation, to achieve accurate cell segmentation in SEM images of bacterial cells grown in a rotary culture system. The approach achieved an 89.52% Dice similarity score for bacterial cell segmentation with lower segmentation error rates, and it was validated against several approaches for segmenting overlapping cellular objects, showing significant performance improvement.
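The abstract describes the post-processing only as morphological over-segmentation resolution, so the snippet below is a minimal sketch of one common way to split touching cells from a semantic probability map, using a distance-transform watershed. The thresholding choice and the min_distance parameter are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the authors' exact pipeline): splitting touching
# cells in a U-Net probability map with a distance-transform watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_cells(prob_map: np.ndarray, min_distance: int = 10) -> np.ndarray:
    """prob_map: 2D array in [0, 1] produced by a semantic segmentation network."""
    mask = prob_map > threshold_otsu(prob_map)        # foreground/background split
    distance = ndi.distance_transform_edt(mask)       # distance to background
    regions, _ = ndi.label(mask)                      # connected foreground blobs
    peaks = peak_local_max(distance, min_distance=min_distance, labels=regions)
    markers = np.zeros_like(regions)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)   # one integer label per cell
```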
- Award ID(s): 1920954
- PAR ID: 10537340
- Publisher / Repository: MDPI
- Date Published:
- Journal Name: Machine Learning and Knowledge Extraction
- Volume: 4
- Issue: 4
- ISSN: 2504-4990
- Page Range / eLocation ID: 1024 to 1041
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Tomaszewski, John E.; Ward, Aaron D. (Eds.) Automatic cell quantification in microscopy images can accelerate biomedical research. There has been significant progress in the 3D segmentation of neurons in fluorescence microscopy. However, it remains a challenge in bright-field microscopy due to the low signal-to-noise ratio and signals from out-of-focus neurons. Automatic neuron counting in bright-field z-stacks is often performed on extended depth of field images or on a single thick focal-plane image. However, resolving overlapping cells that are located at different z-depths is a challenge. The overlap can be resolved by counting every neuron in its best-focus z-plane, because the neurons are separated along the z-axis. Unbiased stereology is the state of the art for estimating total cell number, and its unbiased counting rule requires a segmentation boundary for each cell. Hence, we perform counting via segmentation. We propose to achieve neuron segmentation in the optimal focal plane by posing the binary segmentation task as a multi-class, multi-label task, and to efficiently use a 2D U-Net for inter-image feature learning in a Multiple Input Multiple Output (MIMO) system. We demonstrate the accuracy and efficiency of the MIMO approach using a bright-field microscopy z-stack dataset prepared locally by an expert. The proposed MIMO approach is also validated on a dataset from the Cell Tracking Challenge, achieving results comparable to a compared method equipped with memory units. Our z-stack dataset is available at
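As a rough illustration of the MIMO idea above, the sketch below feeds K neighboring z-slices to a generic 2D U-Net as input channels and predicts K per-slice binary masks, so the binary task becomes a multi-label prediction. The library (segmentation_models_pytorch), K, the encoder, and the image size are assumptions for illustration, not the authors' configuration.

```python
# MIMO-style sketch: K z-slices in, K per-slice binary masks out.
import torch
import segmentation_models_pytorch as smp

K = 5  # number of z-slices processed jointly (assumed value)
model = smp.Unet(encoder_name="resnet34", encoder_weights=None,
                 in_channels=K, classes=K)

zstack = torch.randn(2, K, 256, 256)      # batch of K-slice windows
logits = model(zstack)                     # (2, K, 256, 256): one map per slice
targets = torch.randint(0, 2, (2, K, 256, 256)).float()
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, targets)
```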
-
The detection and segmentation of stained cells and nuclei are essential prerequisites for subsequent quantitative research for many diseases. Recently, deep learning has shown strong performance in many computer vision problems, including solutions for medical image analysis. Furthermore, accurate stereological quantification of microscopic structures in stained tissue sections plays a critical role in understanding human diseases and developing safe and effective treatments. In this article, we review the most recent deep learning approaches for cell (nuclei) detection and segmentation in cancer and Alzheimer's disease with an emphasis on deep learning approaches combined with unbiased stereology. Major challenges include accurate and reproducible cell detection and segmentation of microscopic images from stained sections. Finally, we discuss potential improvements and future trends in deep learning applied to cell detection and segmentation.
-
The Segment Anything Model (SAM) was released as a foundation model for image segmentation. The promptable segmentation model was trained on over 1 billion masks from 11 million licensed and privacy-respecting images. The model supports zero-shot image segmentation with various segmentation prompts (e.g., points, boxes, masks), which makes SAM attractive for medical image analysis, especially for digital pathology, where training data are scarce. In this study, we evaluate the zero-shot segmentation performance of the SAM model on representative segmentation tasks in whole slide imaging (WSI), including (1) tumor segmentation, (2) non-tumor tissue segmentation, and (3) cell nuclei segmentation. Core results: the zero-shot SAM model achieves remarkable segmentation performance for large connected objects. However, it does not consistently achieve satisfactory performance for dense instance-object segmentation, even with 20 prompts (clicks/boxes) per image. We also summarize the identified limitations for digital pathology: (1) image resolution, (2) multiple scales, (3) prompt selection, and (4) model fine-tuning. In the future, few-shot fine-tuning with images from downstream pathological segmentation tasks might help the model achieve better performance in dense object segmentation.
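A minimal example of the kind of zero-shot prompting evaluated above, using the public segment_anything API. The checkpoint path, the stand-in image, and the prompt coordinates are placeholders rather than values from the study.

```python
# Zero-shot SAM prompting sketch with a point prompt and a box prompt.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # placeholder path
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)   # stand-in for an RGB WSI patch
predictor.set_image(image)

# Point prompt: one positive click on an object of interest.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),
    multimask_output=True,
)

# Box prompt around a larger connected region (x0, y0, x1, y1).
masks_box, _, _ = predictor.predict(box=np.array([100, 100, 400, 400]))
```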
-
Myxococcus xanthus bacteria are a model system for understanding pattern formation and collective cell behaviors. When starving, cells aggregate into fruiting bodies to form metabolically inert spores. During predation, cells self-organize into traveling cell-density waves termed ripples. Both phase-contrast and fluorescence microscopy are used to observe these patterns but each has its limitations. Phase-contrast images have higher contrast, but the resulting image intensities lose their correlation with cell density. The intensities of fluorescence microscopy images, on the other hand, are well-correlated with cell density, enabling better segmentation of aggregates and better visualization of streaming patterns in between aggregates; however, fluorescence microscopy requires the engineering of cells to express fluorescent proteins and can be phototoxic to cells. To combine the advantages of both imaging methodologies, we develop a generative adversarial network that converts phase-contrast into synthesized fluorescent images. By including an additional histogram-equalized output to the state-of-the-art pix2pixHD algorithm, our model generates accurate images of aggregates and streams, enabling the estimation of aggregate positions and sizes, but with small shifts of their boundaries. Further training on ripple patterns enables accurate estimation of the rippling wavelength. Our methods are thus applicable for many other phenotypic behaviors and pattern formation studies.
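The histogram-equalized auxiliary output can be read as an extra reconstruction target for the generator. The sketch below shows one generic way to form that target and add it to an L1 reconstruction term; it is a conceptual illustration under assumed function names and weighting, not the authors' pix2pixHD modification.

```python
# Conceptual sketch: auxiliary histogram-equalized target for paired
# phase-contrast -> fluorescence translation (generic, not pix2pixHD code).
import torch
from skimage import exposure

def histeq_target(fluor: torch.Tensor) -> torch.Tensor:
    """Histogram-equalize a real fluorescence image in [0, 1] for the extra head."""
    eq = exposure.equalize_hist(fluor.detach().cpu().numpy())
    return torch.from_numpy(eq).to(fluor.device).float()

def generator_recon_loss(fake_fluor, fake_fluor_eq, real_fluor, w_eq=1.0):
    """L1 on the standard output plus L1 on the histogram-equalized output."""
    target_eq = histeq_target(real_fluor)
    return (torch.nn.functional.l1_loss(fake_fluor, real_fluor)
            + w_eq * torch.nn.functional.l1_loss(fake_fluor_eq, target_eq))
```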
-
Chromatin-sensitive partial wave spectroscopic (csPWS) microscopy offers a non-invasive glimpse into the mass density distribution of cellular structures at the nanoscale by leveraging spectroscopic information. This capability allows us to analyze chromatin structure and organization and the global transcriptional state of the cell nucleus, and to study their roles in carcinogenesis. Accurate segmentation of the nuclei in csPWS microscopy images is an essential step in isolating them for further analysis. However, manual segmentation is error-prone, biased, time-consuming, and laborious, resulting in disrupted nuclear boundaries with partial or over-segmentation. Here, we present a deep-learning-driven approach to automate accurate nuclei segmentation of label-free (without any exogenous fluorescent staining) live-cell csPWS microscopy imaging data. Our approach, csPWS-seg, harnesses a convolutional neural network-based U-Net model with an attention mechanism to automate accurate cell nuclei segmentation of csPWS microscopy images. We leveraged the structural, physical, and biological differences between the cytoplasm, nucleus, and nuclear periphery to construct three distinct csPWS feature images for nucleus segmentation. Using these images of HCT116 cells, csPWS-seg achieved superior performance, with a median intersection over union (IoU) of 0.80 and a Dice similarity coefficient (DSC) of 0.89. csPWS-seg outperformed several other deep learning-based segmentation models commonly used for biomedical imaging, such as U-Net, SE-U-Net, Mask R-CNN, and DeepLabV3+, marking a significant improvement in segmentation accuracy. Further, we analyzed the performance of our proposed model with four loss functions (binary cross-entropy, focal, Dice, and Jaccard) individually, as well as with a combination of all four. csPWS-seg trained with focal loss, or with the combined loss, gave the best results among the loss functions tested. The automatic and accurate nuclei segmentation offered by csPWS-seg not only automates, accelerates, and streamlines csPWS data analysis but also enhances the reliability of subsequent chromatin analysis research, paving the way for more accurate diagnostics, treatment, and understanding of cellular mechanisms of carcinogenesis.
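A minimal sketch of combining the four loss terms mentioned above on sigmoid outputs; the per-term weights and the focal gamma are assumptions, not the paper's settings.

```python
# Combined segmentation loss: BCE + focal + Dice + Jaccard on binary masks.
import torch
import torch.nn.functional as F

def combined_seg_loss(logits, target, gamma=2.0, eps=1e-6,
                      weights=(1.0, 1.0, 1.0, 1.0)):
    prob = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, target)

    # Focal term: down-weights easy pixels by (1 - p_t)^gamma.
    p_t = prob * target + (1 - prob) * (1 - target)
    focal = ((1 - p_t) ** gamma *
             F.binary_cross_entropy_with_logits(logits, target,
                                                 reduction="none")).mean()

    # Soft Dice and Jaccard terms computed on probabilities.
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    jaccard = 1 - (inter + eps) / (prob.sum() + target.sum() - inter + eps)

    w_bce, w_focal, w_dice, w_jac = weights
    return w_bce * bce + w_focal * focal + w_dice * dice + w_jac * jaccard
```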