Scanning electron microscopy (SEM) has been used extensively to acquire high-resolution images of bacterial cells. Segmenting bacteria in SEM images is an essential step toward distinguishing each object of interest and delineating its region. The resulting segmentations can then be used to retrieve quantitative measures (e.g., cell length, area, cell density) that support accurate downstream analysis of cellular objects. However, bacterial segmentation is a difficult task: the intensity and texture of foreground and background are similar, and most clustered bacterial cells in images partially overlap one another. Traditional approaches to identifying cell regions in microscopy images are labor intensive and depend heavily on the expertise of the researcher. To mitigate these challenges, in this study we tested a U-Net-based semantic segmentation architecture followed by a morphological post-processing step that resolves over-segmentation, achieving accurate cell segmentation in SEM images of bacterial cells grown in a rotary culture system. The approach achieved an 89.52% Dice similarity score on bacterial cell segmentation with lower segmentation error rates, validated against several approaches for segmenting overlapping cells with significant performance improvement.
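The Dice similarity score reported above measures the overlap between a predicted mask and ground truth; a minimal sketch of its computation for flat binary masks (pure Python, illustrative only — the function name and representation are ours, not the paper's):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient for two binary masks given as flat
    0/1 sequences of equal length: 2 * |P intersect T| / (|P| + |T|)."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 2.0 * inter / total if total else 1.0
```

A perfect prediction scores 1.0 and disjoint masks score 0.0, so the reported 89.52% indicates predicted cell masks overlapping ground truth almost completely.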
Cell segmentation using stable extremal regions in multi-exposure microscopy images
We propose a novel cell segmentation approach that extracts Multi-exposure Maximally Stable Extremal Regions (MMSER) from phase contrast microscopy images of the same cell dish. With our method, cell regions are identified as the regions that remain maximally stable across different camera exposure times. Meanwhile, the halo artifacts surrounding cells at different stages are leveraged to classify each cell's stage. The experimental results validate that our approach achieves high-quality cell segmentation and cell stage classification.
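The notion of stability behind extremal regions — a region is "stable" when its size barely changes as the selection level varies — can be sketched in one dimension. This toy version (our construction, not the paper's algorithm) thresholds an intensity profile at several levels, standing in for exposure settings, and picks the level whose region changes least:

```python
def most_stable_threshold(intensities, thresholds):
    """Toy 1D analogue of extremal-region stability: threshold an
    intensity profile at each level, then return the level whose
    region size changes least relative to its neighbors."""
    sizes = [sum(1 for v in intensities if v >= t) for t in thresholds]
    best_i, best_delta = None, float("inf")
    for i in range(1, len(sizes) - 1):
        if sizes[i] == 0:
            continue  # empty region: nothing to stabilize
        delta = abs(sizes[i - 1] - sizes[i + 1]) / sizes[i]
        if delta < best_delta:
            best_i, best_delta = i, delta
    return thresholds[best_i]
```

A plateau of cell-like intensities yields a run of thresholds with identical region sizes, which is exactly where the most stable threshold lands.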
- Award ID(s):
- 1355406
- PAR ID:
- 10023446
- Date Published:
- Journal Name:
- Proceedings (International Symposium on Biomedical Imaging)
- Page Range / eLocation ID:
- 526 to 530
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Tomaszewski, John E.; Ward, Aaron D. (Eds.)
Automatic cell quantification in microscopy images can accelerate biomedical research. There has been significant progress in the 3D segmentation of neurons in fluorescence microscopy. However, segmentation remains a challenge in bright-field microscopy due to the low signal-to-noise ratio and signals from out-of-focus neurons. Automatic neuron counting in bright-field z-stacks is often performed on Extended Depth of Field images or on a single thick focal-plane image. However, resolving overlapping cells located at different z-depths is a challenge. Because such cells are separated along the z-axis, the overlap can be resolved by counting every neuron in its best-focus z-plane. Unbiased stereology is the state of the art for total cell number estimation, and applying its unbiased counting rule requires a segmentation boundary for each cell; hence, we perform counting via segmentation. We propose to achieve neuron segmentation in the optimal focal plane by posing the binary segmentation task as a multi-class, multi-label task, solved efficiently with a single 2D U-Net that learns inter-image features in a Multiple Input Multiple Output (MIMO) system. We demonstrate the accuracy and efficiency of the MIMO approach on a bright-field microscopy z-stack dataset prepared locally by an expert. The proposed MIMO approach is also validated on a dataset from the Cell Tracking Challenge, achieving results comparable to a compared method equipped with memory units. Our z-stack dataset is available at
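The MIMO reformulation — one binary mask per focal plane, merged into a single multi-label target — can be illustrated with a small helper (illustrative only; the names and flat-mask representation are ours):

```python
def multilabel_target(z_masks):
    """Fold per-plane binary masks (one flat 0/1 list per focal plane)
    into one multi-label target: each pixel maps to the list of plane
    indices where it is foreground. A pixel covered by overlapping cells
    at different depths keeps one label per depth, so the overlap is
    preserved rather than collapsed into a single binary decision."""
    n_pixels = len(z_masks[0])
    return [
        [z for z, mask in enumerate(z_masks) if mask[i]]
        for i in range(n_pixels)
    ]
```

In this framing, a network with one output channel per z-plane can predict all planes at once, which is the essence of turning a binary task into a multi-class, multi-label one.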
Abstract We present NoodlePrint, a generalized computational framework for maximally concurrent layer-wise cooperative 3D printing (C3DP) of arbitrary part geometries with multiple robots. NoodlePrint is inspired by a recently discovered set of helically interlocked space-filling shapes called VoroNoodles. Leveraging this unique geometric relationship, we introduce an algorithmic pipeline for generating helically interlocked cellular segmentation of arbitrary parts followed by layer-wise cell sequencing and path planning for cooperative 3D printing. Furthermore, we introduce a novel concurrence measure that quantifies the amount of printing parallelization across multiple robots. Consequently, we integrate this measure to optimize the location and orientation of a part for maximally parallel printing. We systematically study the relationship between the helix parameters (i.e., cellular interlocking), the cell size, the amount of concurrent printing, and the total printing time. Our study revealed that both concurrence and time to print primarily depend on the cell size, thereby allowing the determination of interlocking independent of time to print. To demonstrate the generality of our approach with respect to part geometry and the number of robots, we implemented two cooperative 3D printing systems with two and three printing robots and printed a variety of part geometries. Through comparative bending and tensile tests, we show that helically interlocked part segmentation is robust to gaps between segments.
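The abstract does not give the formula for its concurrence measure; one plausible, clearly hypothetical reading — the ratio of total printing work to the makespan scaled by robot count — can be sketched as follows (our assumption, not NoodlePrint's definition):

```python
def concurrence(robot_times):
    """Hypothetical concurrence measure (our assumption, not the paper's
    definition): total work divided by (number of robots * makespan).
    Equals 1.0 when every robot prints for the entire build (perfect
    parallelism) and 1/n when only one robot works at a time."""
    makespan = max(robot_times)
    return sum(robot_times) / (len(robot_times) * makespan)
```

Under this reading, optimizing part placement for maximal parallelism amounts to balancing per-robot print time so the ratio approaches 1.0.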
Abstract Timelapse microscopy has recently been employed to study the metabolism and physiology of cyanobacteria at the single-cell level. However, the identification of individual cells in brightfield images remains a significant challenge. Traditional intensity-based segmentation algorithms perform poorly when identifying individual cells in dense colonies due to a lack of contrast between neighboring cells. Here, we describe a newly developed software package called Cypose, which uses machine learning (ML) models to solve two specific tasks: segmentation of individual cyanobacterial cells and classification of cellular phenotypes. The segmentation models are based on the Cellpose framework, while classification is performed by a convolutional neural network named Cyclass. To our knowledge, these are the first ML-based models developed for cyanobacteria segmentation and classification. Compared to other methods, our segmentation models showed improved performance and were able to segment cells with varied morphological phenotypes, as well as differentiate between live and lysed cells. We also found that our models were robust to imaging artifacts such as dust and cell debris. Additionally, the classification model was able to identify different cellular phenotypes using only images as input. Together, these models improve cell segmentation accuracy and enable high-throughput analysis of dense cyanobacterial colonies and filamentous cyanobacteria.
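Cyclass itself is a convolutional neural network; as a self-contained stand-in, a minimal nearest-centroid classifier over precomputed image features shows the same label-from-features interface (everything here is our illustration, not the Cypose API):

```python
import math

def nearest_centroid(train, sample):
    """Minimal nearest-centroid classifier: assign `sample` the label of
    the closest class centroid in feature space. A toy stand-in for a
    trained CNN such as the Cyclass network described above; in practice
    the features would be learned rather than hand-crafted."""
    best_label, best_dist = None, float("inf")
    for label, vectors in train.items():
        centroid = [sum(col) / len(vectors) for col in zip(*vectors)]
        dist = math.dist(centroid, sample)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

The point of the sketch is the interface — features in, phenotype label out — which is what lets a classifier like Cyclass slot into a timelapse analysis pipeline after segmentation.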
ABSTRACT Cochlear hair cell stereocilia bundles are key organelles required for normal hearing. Deafness mutations often cause aberrant stereocilia heights or morphology that are visually apparent but challenging to quantify. As actin-based structures, stereocilia are most easily and most often labeled with phalloidin and then imaged with 3D confocal microscopy. Unfortunately, phalloidin labels all of the actin in the tissue non-specifically, which creates a challenging segmentation task: the stereocilia phalloidin signal must be separated from the rest of the tissue, which can require many hours of manual human effort for each 3D confocal image stack. Currently, no existing software pipeline provides an end-to-end automated solution for 3D stereocilia bundle instance segmentation. Here we introduce VASCilia, a Napari plugin designed to automatically generate 3D instance segmentations and analyses of 3D confocal images of cochlear hair cell stereocilia bundles stained with phalloidin. The plugin combines user-friendly manual controls with advanced deep learning-based features to streamline analyses. With VASCilia, users begin by loading image stacks; the software automatically preprocesses the samples and displays them in Napari. Users can then select their desired range of z-slices, adjust the orientation, and initiate 3D instance segmentation. After segmentation, users can remove any undesired regions and obtain measurements including volume, centroid, and surface area. VASCilia introduces unique features that measure bundle heights, determine bundle orientation with respect to the planar polarity axis, and quantify the fluorescence intensity within each bundle. The plugin also ships with trained deep learning models that differentiate between inner hair cells and outer hair cells and predict their tonotopic position within the cochlear spiral.
Additionally, the plugin includes a training section that allows other laboratories to fine-tune our model with their own data, provides responsive mechanisms for manual corrections through event handlers that check user actions, and lets users share their analyses by uploading a pickle file containing all intermediate results. We believe this software will become a valuable resource for the cochlea research community, which has traditionally lacked specialized deep learning-based tools for high-throughput image quantitation. Furthermore, we plan to release our code along with a manually annotated dataset of approximately 55 3D stacks with instance segmentation. The dataset comprises a total of 1,870 hair cell instances, distributed between 410 inner hair cells and 1,460 outer hair cells, all annotated in 3D. As the first open-source dataset of its kind, it is intended as a foundational resource for constructing a comprehensive atlas of cochlear hair cell images. Together, this open-source tool will greatly accelerate the analysis of stereocilia bundles and demonstrates the power of deep learning-based algorithms for challenging segmentation tasks in biological imaging research. Ultimately, this initiative will support the development of foundational models adaptable to various species, markers, and imaging scales, advancing and accelerating research within the cochlea research community.
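Per-bundle measurements such as volume and centroid reduce to simple voxel arithmetic once an instance mask exists; a minimal sketch (our helper names, assuming one list of foreground voxel coordinates per bundle — not VASCilia's implementation):

```python
def bundle_stats(voxels):
    """Compute basic measurements for one segmented bundle.
    `voxels`: list of (z, y, x) foreground coordinates. Volume is the
    voxel count (scale by voxel dimensions for physical units); the
    centroid is the mean coordinate. Surface area, by contrast, would
    require meshing the mask boundary and is omitted here."""
    n = len(voxels)
    centroid = tuple(sum(axis) / n for axis in zip(*voxels))
    return {"volume": n, "centroid": centroid}
```

Measurements like bundle height then follow from the extent of the voxel coordinates along the bundle's axis after the orientation step described above.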