

Title: PrestoCell: A persistence-based clustering approach for rapid and robust segmentation of cellular morphology in three-dimensional data
Light microscopy methods have continued to advance, allowing unprecedented analysis of various cell types in tissues, including the brain. Although the functional state of some cell types, such as microglia, can be determined by morphometric analysis, techniques to perform robust, quick, and accurate measurements have not kept pace with the amount of imaging data that can now be generated. Most existing image segmentation tools are further burdened by an inability to assess structures in three dimensions. Despite the rise of machine learning techniques, the nature of some biological structures prevents the training of several current implementations. Here we present PrestoCell, a novel use of persistence-based clustering to segment cells in light microscopy images, as a customized Python-based tool that leverages the free multidimensional image viewer Napari. In evaluating and comparing PrestoCell to several existing tools, including 3DMorph, Omnipose, and Imaris, we demonstrate that PrestoCell produces image segmentations that rival these solutions. In particular, our use of cell nuclei information allowed individual cells that were interacting with one another to be correctly segmented, increasing accuracy. These benefits come in addition to simple, graphical user refinement of cell masks that does not require expensive commercial software licenses. We further demonstrate that PrestoCell can complete image segmentation in large samples from light sheet microscopy, allowing quantitative analysis of these large datasets. As an open-source program that leverages freely available visualization software, with minimal computing requirements, we believe that PrestoCell can significantly increase the ability of users without data or computer science expertise to perform complex image analysis.
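The core idea behind persistence-based clustering can be illustrated in miniature: process points from brightest to dimmest, grow a cluster from each local maximum, and merge a cluster into a more prominent neighbor when its prominence ("persistence") falls below a threshold. The toy sketch below, on a 1D intensity profile, illustrates only that general idea and is not PrestoCell's actual implementation:

```python
import numpy as np

def persistence_cluster(signal, tau):
    """Toy persistence-based clustering of a 1D intensity profile.

    Points are processed from brightest to dimmest; each local maximum
    starts a cluster, and a cluster merges into a more prominent
    neighbor when its persistence (peak height minus merge height)
    is below tau.
    """
    n = len(signal)
    labels = -np.ones(n, dtype=int)
    parent = {}        # union-find over cluster ids
    peak = {}          # cluster id -> height at which it was born

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    next_id = 0
    for i in np.argsort(signal)[::-1]:
        roots = {find(labels[j]) for j in (i - 1, i + 1)
                 if 0 <= j < n and labels[j] >= 0}
        if not roots:
            # new local maximum: birth of a new cluster
            labels[i] = next_id
            parent[next_id] = next_id
            peak[next_id] = signal[i]
            next_id += 1
        else:
            best = max(roots, key=lambda c: peak[c])
            labels[i] = best
            for c in roots - {best}:
                if peak[c] - signal[i] < tau:
                    parent[c] = best   # low-persistence cluster merges
    return np.array([find(c) for c in labels])

# Two peaks (heights 3 and 5); with tau=1.0 both survive as clusters.
labels = persistence_cluster(np.array([1.0, 3.0, 1.0, 5.0, 1.0]), tau=1.0)
```

Raising tau merges the shallower peak into the brighter one, which is how spurious local maxima inside a single cell can be suppressed.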
Award ID(s):
1934568
PAR ID:
10555605
Editor(s):
Neueder, Andreas
Publisher / Repository:
PLOS
Date Published:
Journal Name:
PLOS ONE
Volume:
19
Issue:
2
ISSN:
1932-6203
Page Range / eLocation ID:
e0299006
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Abstract: Time-lapse microscopy has recently been employed to study the metabolism and physiology of cyanobacteria at the single-cell level. However, the identification of individual cells in brightfield images remains a significant challenge. Traditional intensity-based segmentation algorithms perform poorly when identifying individual cells in dense colonies due to a lack of contrast between neighboring cells. Here, we describe a newly developed software package called Cypose which uses machine learning (ML) models to solve two specific tasks: segmentation of individual cyanobacterial cells, and classification of cellular phenotypes. The segmentation models are based on the Cellpose framework, while classification is performed using a convolutional neural network named Cyclass. To our knowledge, these are the first ML-based models developed for cyanobacteria segmentation and classification. When compared to other methods, our segmentation models showed improved performance and were able to segment cells with varied morphological phenotypes, as well as differentiate between live and lysed cells. We also found that our models were robust to imaging artifacts, such as dust and cell debris. Additionally, the classification model was able to identify different cellular phenotypes using only images as input. Together, these models improve cell segmentation accuracy and enable high-throughput analysis of dense cyanobacterial colonies and filamentous cyanobacteria.
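The contrast problem described above, where intensity-based segmentation fuses neighboring cells in dense colonies, can be demonstrated with a toy example (synthetic data, not Cypose code):

```python
import numpy as np
from scipy import ndimage

# Two touching "cells": Gaussian blobs whose intensity profiles overlap.
yy, xx = np.mgrid[0:60, 0:60]

def blob(cy, cx, sigma=6.0):
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

img = blob(30, 22) + blob(30, 38)

# A simple global threshold fuses the blobs into one connected component,
# because the valley between them never drops below the threshold.
mask = img > 0.3
labels, n = ndimage.label(mask)
```

Here `n` comes out as 1, a single fused object, which is exactly the failure mode that motivates learned, shape-aware models such as Cellpose-based segmenters.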
  2. Background: We performed a systematic review that identified at least 9,000 scientific papers on PubMed that include immunofluorescent images of cells from the central nervous system (CNS). These CNS papers contain tens of thousands of immunofluorescent neural images supporting the findings of over 50,000 associated researchers. While many existing reviews discuss different aspects of immunofluorescent microscopy, such as image acquisition and staining protocols, few papers discuss immunofluorescent imaging from an image-processing perspective. We analyzed the literature to determine the image processing methods that were commonly published alongside the associated CNS cell, microscopy technique, and animal model, and highlight gaps in image processing documentation and reporting in the CNS research field. Methods: We completed a comprehensive search of PubMed publications using Medical Subject Headings (MeSH) terms and other general search terms for CNS cells and common fluorescent microscopy techniques. Publications were found on PubMed using a combination of column description terms and row description terms. We manually tagged the comma-separated values (CSV) file metadata of each publication with the following categories: animal or cell model, quantified features, threshold techniques, segmentation techniques, and image processing software. Results: Of the almost 9,000 immunofluorescent imaging papers identified in our search, only 856 explicitly include image processing information. Moreover, hundreds of those 856 papers are missing the thresholding, segmentation, and morphological feature details necessary for explainable, unbiased, and reproducible results. In our assessment of the literature, we visualized current image processing practices, compiled the image processing options from the top twelve software programs, and designed a road map to enhance image processing. We determined that thresholding and segmentation methods were often omitted from publications and were underreported or underused in quantitative CNS cell research. Discussion: Less than 10% of papers with immunofluorescent images include image processing details in their methods. A few authors are implementing advanced image analysis methods to quantify over 40 different CNS cell features, which can provide quantitative insights that will advance CNS research. However, our review argues that image analysis methods will remain limited in rigor and reproducibility without more rigorous and detailed reporting of image processing methods. Conclusion: Image processing is a critical part of CNS research that must be improved to increase scientific insight, explainability, reproducibility, and rigor.
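As an example of the kind of thresholding detail the review finds underreported, Otsu's method selects a global threshold by maximizing between-class variance; the sketch below is a plain NumPy illustration of the algorithm, not code from any of the surveyed papers:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Otsu's method: pick the intensity threshold that maximizes
    the between-class variance of the histogram."""
    hist, edges = np.histogram(image, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                 # background pixel counts
    w1 = w0[-1] - w0                     # foreground pixel counts
    m0 = np.cumsum(hist * centers)       # cumulative intensity mass
    mu0 = m0 / np.maximum(w0, 1)         # background mean per cut
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)  # foreground mean per cut
    sigma_b = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
    return centers[np.argmax(sigma_b)]
```

Reporting the method name and its parameters (here, the bin count) is the minimum documentation a reproducible pipeline needs.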
  3. Fluorescently labeled proteins absorb and emit light, appearing as Gaussian spots in fluorescence imaging. When fluorescent tags are added to cytoskeletal polymers such as microtubules, the result is a line of fluorescence, and sometimes more complex non-linear structures. While much progress has been made in techniques for imaging and microscopy, image analysis is less well developed. Current analysis of fluorescent microtubules uses either manual tools, such as kymographs, or automated software. As a result, our ability to quantify microtubule dynamics and organization from light microscopy remains limited. Despite the development of automated microtubule analysis tools for in vitro studies, analysis of images from cells often depends heavily on manual analysis. One of the main reasons for this disparity is the low signal-to-noise ratio in cells, where background fluorescence is typically higher than in reconstituted systems. Here, we present the Toolkit for Automated Microtubule Tracking (TAMiT), which automatically detects, optimizes, and tracks fluorescent microtubules in living yeast cells with sub-pixel accuracy. Using basic information about microtubule organization, TAMiT detects linear and curved polymers using a geometrical scanning technique. Candidate structures are then fit to the image by optimizing the microtubule image parameters with non-linear least squares in Matlab. We benchmark our software using simulated images and show that it reliably detects microtubules, even at low signal-to-noise ratios. Then, we use TAMiT to measure monopolar spindle microtubule bundle number, length, and lifetime in a large dataset that includes several S. pombe mutants that affect microtubule dynamics and bundling. The results from the automated analysis are consistent with previous work and suggest a direct role for CLASP/Cls1 in bundling spindle microtubules. We also illustrate automated tracking of single curved astral microtubules in S. cerevisiae, with measurement of dynamic instability parameters. The results obtained with our fully automated software are similar to results from hand-tracked measurements. Therefore, TAMiT can facilitate automated analysis of spindle and microtubule dynamics in yeast cells.
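The non-linear least squares fitting step that TAMiT performs in Matlab can be sketched in Python with SciPy; the Gaussian-plus-background model, parameter values, and simulated spot below are illustrative assumptions, not TAMiT's actual parameterization:

```python
import numpy as np
from scipy.optimize import least_squares

# Simulated 1D fluorescent spot: Gaussian peak on a constant background,
# plus noise (all values chosen for illustration only).
x = np.linspace(0, 20, 200)
rng = np.random.default_rng(1)
y = 5.0 * np.exp(-(x - 12.3) ** 2 / (2 * 1.1 ** 2)) + 0.5
y = y + rng.normal(0, 0.05, x.size)

def residuals(p):
    """Model minus data for parameters (amplitude, center, width, background)."""
    amp, mu, sigma, bg = p
    return amp * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) + bg - y

# Non-linear least squares recovers the spot center with sub-pixel accuracy,
# even from a deliberately rough initial guess.
fit = least_squares(residuals, x0=[1.0, 10.0, 2.0, 0.0])
amp, mu, sigma, bg = fit.x
```

The fitted center `mu` lands within a small fraction of a pixel of the true value, which is the sense in which model fitting gives sub-pixel localization.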
  4. Abstract: Computational modeling of cardiovascular function has become a critical part of diagnosing, treating, and understanding cardiovascular disease. Most strategies involve constructing anatomically accurate computer models of cardiovascular structures, which is a multistep, time-consuming process. To improve the model generation process, we herein present SeqSeg (sequential segmentation): a novel deep learning-based automatic tracing and segmentation algorithm for constructing image-based vascular models. SeqSeg leverages local U-Net-based inference to sequentially segment vascular structures from medical image volumes. We tested SeqSeg on CT and MR images of aortic and aortofemoral models and compared the predictions to those of benchmark 2D and 3D global nnU-Net models, which have previously shown excellent accuracy for medical image segmentation. We demonstrate that SeqSeg segments more complete vasculature and generalizes to vascular structures not annotated in the training data.
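The sequential local-inference idea can be caricatured with a toy tracer that repeatedly "segments" a small window and steps along the detected structure. In this sketch a simple threshold stands in for SeqSeg's local U-Net, and the image, window sizes, and function names are all invented for illustration:

```python
import numpy as np

# Synthetic "vessel": a bright horizontal tube in a 2D image.
img = np.zeros((40, 200))
img[18:23, :] = 1.0

def trace(img, seed, patch=15, steps=12):
    """Sequentially segment a local patch, re-center on the structure,
    and advance, mimicking local inference along a vessel."""
    y, x = seed
    path = [(y, x)]
    for _ in range(steps):
        top = max(y - 7, 0)
        win = img[top:y + 8, x:x + patch]
        ys, xs = np.nonzero(win > 0.5)   # local "segmentation" step
        if ys.size == 0:                 # lost the structure: stop tracing
            break
        y = top + int(round(ys.mean()))  # re-center on the detected tube
        x = x + patch                    # advance along the vessel
        path.append((y, x))
    return path

path = trace(img, seed=(20, 0))
```

Because each step only needs a local patch, the same scheme can follow branching or curving structures far beyond what a single global pass covers, which is the intuition behind sequential segmentation.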
  5. Abstract: Cochlear hair cell stereocilia bundles are key organelles required for normal hearing. Deafness mutations often cause aberrant stereocilia heights or morphology that are visually apparent but challenging to quantify. As actin-based structures, stereocilia are most often labeled with phalloidin and then imaged with 3D confocal microscopy. Unfortunately, phalloidin non-specifically labels all the actin in the tissue and cells, resulting in a challenging segmentation task in which the stereocilia phalloidin signal must be separated from the rest of the tissue. This can require many hours of manual human effort for each 3D confocal image stack. Currently, no existing software pipeline provides an end-to-end automated solution for 3D stereocilia bundle instance segmentation. Here we introduce VASCilia, a Napari plugin designed to automatically generate 3D instance segmentation and analysis of 3D confocal images of cochlear hair cell stereocilia bundles stained with phalloidin. This plugin combines user-friendly manual controls with advanced deep learning-based features to streamline analyses. With VASCilia, users begin their analysis by loading image stacks. The software automatically preprocesses these samples and displays them in Napari. At this stage, users can select their desired range of z-slices, adjust their orientation, and initiate 3D instance segmentation. After segmentation, users can remove any undesired regions and obtain measurements including volume, centroids, and surface area. VASCilia introduces unique features that measure bundle heights, determine their orientation with respect to the planar polarity axis, and quantify the fluorescence intensity within each bundle. The plugin is also equipped with trained deep learning models that differentiate between inner and outer hair cells and predict their tonotopic position within the cochlear spiral. Additionally, the plugin includes a training section that allows other laboratories to fine-tune our model with their own data, provides responsive mechanisms for manual corrections through event handlers that check user actions, and allows users to share their analyses by uploading a pickle file containing all intermediate results. We believe this software will become a valuable resource for the cochlea research community, which has traditionally lacked specialized deep learning-based tools for high-throughput image quantitation. Furthermore, we plan to release our code along with a manually annotated dataset that includes approximately 55 3D stacks with instance segmentation. This dataset comprises a total of 1,870 instances of hair cells, distributed between 410 inner hair cells and 1,460 outer hair cells, all annotated in 3D. As the first open-source dataset of its kind, we aim to establish a foundational resource for constructing a comprehensive atlas of cochlear hair cell images. This open-source tool will greatly accelerate the analysis of stereocilia bundles and demonstrates the power of deep learning-based algorithms for challenging segmentation tasks in biological imaging research. Ultimately, this initiative will support the development of foundational models adaptable to various species, markers, and imaging scales to advance and accelerate research within the cochlea research community.
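One of the simplest measurements such a pipeline can report, bundle height, can be approximated from a 3D instance mask as the occupied z-extent times the slice spacing. The crude sketch below is an illustrative assumption (the mask shape and z-step value are made up), not VASCilia's actual height computation:

```python
import numpy as np

# Toy 3D instance mask: one "bundle" labeled 1, spanning z-slices 3..9.
mask = np.zeros((12, 32, 32), dtype=np.uint8)
mask[3:10, 10:20, 10:20] = 1

def bundle_height(mask, label, z_step_um=0.3):
    """Approximate a labeled bundle's height as its z-extent (number of
    occupied slices) multiplied by the z spacing in micrometers.
    The 0.3 um default z-step is a hypothetical acquisition setting."""
    z = np.nonzero((mask == label).any(axis=(1, 2)))[0]
    return (z.max() - z.min() + 1) * z_step_um

h = bundle_height(mask, 1)   # 7 occupied slices times the z spacing
```

A z-extent measure like this is only a proxy; a tilted bundle would need its height measured along its own axis, which is why orientation relative to the planar polarity axis matters for accurate quantification.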