Title: Prevalence and practices of immunofluorescent cell image processing: a systematic review
Background

We performed a systematic review that identified at least 9,000 scientific papers on PubMed that include immunofluorescent images of cells from the central nervous system (CNS). These CNS papers contain tens of thousands of immunofluorescent neural images supporting the findings of over 50,000 associated researchers. While many existing reviews discuss different aspects of immunofluorescent microscopy, such as image acquisition and staining protocols, few papers discuss immunofluorescent imaging from an image-processing perspective. We analyzed the literature to determine which image processing methods were commonly published alongside the associated CNS cell type, microscopy technique, and animal model, and to highlight gaps in image processing documentation and reporting in the CNS research field.

Methods

We completed a comprehensive search of PubMed publications using Medical Subject Headings (MeSH) terms and other general search terms for CNS cells and common fluorescent microscopy techniques. Publications were retrieved from PubMed using combinations of these cell-type and microscopy-technique search terms. We then manually tagged each publication's metadata in a comma-separated values (CSV) file with the following categories: animal or cell model, quantified features, threshold techniques, segmentation techniques, and image processing software.
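To make the tagging workflow concrete, here is a minimal sketch of how publication metadata exported from PubMed could be annotated with these categories in Python using pandas. The file name and column names are illustrative assumptions, not the authors' actual schema.

```python
# Minimal sketch of the manual CSV tagging step, assuming a PubMed export
# with one row per publication. File and column names are hypothetical.
import pandas as pd

papers = pd.read_csv("pubmed_export.csv")

# Add the five annotation categories as empty columns to be filled in by hand.
categories = [
    "animal_or_cell_model",
    "quantified_features",
    "threshold_techniques",
    "segmentation_techniques",
    "image_processing_software",
]
for category in categories:
    papers[category] = ""

papers.to_csv("pubmed_export_tagged.csv", index=False)
```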

Results

Of the almost 9,000 immunofluorescent imaging papers identified in our search, only 856 explicitly include image processing information. Moreover, hundreds of these 856 papers are missing the thresholding, segmentation, and morphological feature details necessary for explainable, unbiased, and reproducible results. In our assessment of the literature, we visualized current image processing practices, compiled the image processing options from the top twelve software programs, and designed a road map to enhance image processing. We determined that thresholding and segmentation methods were often omitted from publications, and were underreported or underutilized in quantitative CNS cell research.
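As an illustration of the level of reporting the review calls for, the sketch below spells out a complete thresholding-and-segmentation pipeline using scikit-image. The input file, the choice of Otsu thresholding, the minimum object size, and the feature list are assumptions chosen for this example, not methods drawn from any reviewed paper.

```python
# A fully reported thresholding-and-segmentation pipeline (illustrative).
from skimage import filters, io, measure, morphology

img = io.imread("neuron_channel.tif")  # single-channel immunofluorescent image

# Thresholding: name the algorithm and report the computed value.
thresh = filters.threshold_otsu(img)
mask = img > thresh

# Segmentation: remove small debris, then label connected components.
mask = morphology.remove_small_objects(mask, min_size=50)
labels = measure.label(mask)

# Morphological features: state exactly which features were quantified.
features = measure.regionprops_table(labels, properties=("area", "eccentricity"))

print(f"Otsu threshold = {thresh:.1f}; {labels.max()} objects detected")
```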

Discussion

Fewer than 10% of papers with immunofluorescent images include image processing in their methods. A few authors are implementing advanced image analysis methods to quantify over 40 different CNS cell features, which can provide quantitative insights into CNS cell biology that will advance CNS research. However, our review argues that image analysis will remain limited in rigor and reproducibility without more complete and detailed reporting of image processing methods.

Conclusion

Image processing is a critical part of CNS research that must be improved to increase scientific insight, explainability, reproducibility, and rigor.

Award ID(s): 1934292
NSF-PAR ID: 10482705
Publisher / Repository: PubMed
Journal Name: Frontiers in Cellular Neuroscience
Volume: 17
ISSN: 1662-5102
Sponsoring Org: National Science Foundation
More Like this
  1. Motivation

    Morphological analyses with flatmount fluorescent images are essential to retinal pigment epithelial (RPE) aging studies and thus require accurate RPE cell segmentation. Although rapid advances in deep learning semantic segmentation have achieved great success in many biomedical research applications, the performance of these supervised learning methods for RPE cell segmentation is still limited by inadequate training data with high-quality annotations.

    Results

    To address this problem, we develop a Self-Supervised Semantic Segmentation (S4) method that utilizes a self-supervised learning strategy to train a semantic segmentation network with an encoder–decoder architecture. We employ a reconstruction loss and a pairwise representation loss to make the encoder extract structural information, while we create a morphology loss to produce the segmentation map. In addition, we develop a novel image augmentation algorithm (AugCut) to produce multiple views for self-supervised learning and enhance network training performance. To validate the efficacy of our method, we applied S4 for RPE cell segmentation to a large set of flatmount fluorescent microscopy images and compared it with other state-of-the-art deep learning approaches. Our method demonstrates better performance in both qualitative and quantitative evaluations, suggesting its promising potential to support large-scale cell morphological analyses in RPE aging investigations.
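    For readers unfamiliar with multi-loss self-supervised training, the sketch below shows one way a training step combining a reconstruction loss, a pairwise representation loss, and a morphology loss could be wired together in PyTorch. The module interfaces, loss weights, and the exact forms of the pairwise and morphology losses are placeholders; the authors' actual implementation is available in the repository linked below.

    ```python
    # Hedged sketch of an S4-style multi-loss training step; the loss forms
    # and weights here are placeholders, not the published implementation.
    import torch
    import torch.nn.functional as F

    def s4_step(encoder, decoder, seg_head, view1, view2, weights=(1.0, 1.0, 1.0)):
        z1, z2 = encoder(view1), encoder(view2)  # features from two AugCut views

        # Reconstruction loss: encoder + decoder must reproduce the input view.
        recon = F.mse_loss(decoder(z1), view1)

        # Pairwise representation loss: two augmented views of the same image
        # should map to similar pooled embeddings.
        pair = F.mse_loss(z1.mean(dim=(2, 3)), z2.mean(dim=(2, 3)))

        # Morphology loss placeholder: a smoothness penalty on the predicted
        # segmentation map stands in for the paper's morphology loss.
        seg = torch.sigmoid(seg_head(z1))
        morph = (seg[:, :, 1:, :] - seg[:, :, :-1, :]).abs().mean() \
              + (seg[:, :, :, 1:] - seg[:, :, :, :-1]).abs().mean()

        return weights[0] * recon + weights[1] * pair + weights[2] * morph
    ```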

    Availability and implementation

    The code and documentation are available at: https://github.com/jkonglab/S4_RPE.

  2. Scanning electron microscopy (SEM) has been used extensively to acquire high-resolution images of bacterial cells. Bacterial image segmentation in SEM images is an essential task for distinguishing an object of interest and its specific region. These segmentation results can then be used to retrieve quantitative measures (e.g., cell length, area, cell density) for accurate analysis of cellular objects. However, the bacterial segmentation task is complicated because the intensity and texture of the foreground and background are similar, and most clustered bacterial cells in images partially overlap one another. Traditional approaches to identifying cell regions in microscopy images are labor intensive and depend heavily on the professional knowledge of researchers. To mitigate these challenges, in this study, we tested a U-Net-based semantic segmentation architecture followed by a post-processing step of morphological over-segmentation resolution to achieve accurate segmentation of SEM images of bacterial cells grown in a rotary culture system. The approach achieved an 89.52% Dice similarity score with lower segmentation error rates, validated against several overlapping-object segmentation approaches with significant performance improvement.
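    A common way to implement the overlap-resolution post-processing step described above is a distance-transform-seeded watershed applied to the network's probability map. The sketch below illustrates that general technique with scikit-image; the probability threshold and peak footprint are illustrative assumptions, not the paper's settings.

    ```python
    # Split touching cells in a U-Net probability map with a seeded watershed.
    import numpy as np
    from scipy import ndimage
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def split_touching_cells(prob_map, threshold=0.5):
        mask = prob_map > threshold                      # binarize network output
        distance = ndimage.distance_transform_edt(mask)  # distance to background
        # One marker per local distance maximum, i.e., per presumed cell center.
        coords = peak_local_max(distance, footprint=np.ones((7, 7)), labels=mask)
        markers = np.zeros(mask.shape, dtype=int)
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        # Watershed on the inverted distance map separates overlapping cells.
        return watershed(-distance, markers, mask=mask)
    ```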

  3. Images document scientific discoveries and are prevalent in modern biomedical research. Microscopy imaging in particular is currently undergoing rapid technological advancements. However, for scientists wishing to publish obtained images and image-analysis results, there are currently no unified guidelines for best practices. Consequently, microscopy images and image data in publications may be unclear or difficult to interpret. Here, we present community-developed checklists for preparing light microscopy images and describing image analyses for publications. These checklists offer authors, readers and publishers key recommendations for image formatting and annotation, color selection, data availability and reporting image-analysis workflows. The goal of our guidelines is to increase the clarity and reproducibility of image figures and thereby to heighten the quality and explanatory power of microscopy data. 
  4. Chondrocyte viability is a crucial factor in evaluating cartilage health. Most cell viability assays rely on dyes and are not applicable for in vivo or longitudinal studies. We previously demonstrated that two-photon excited autofluorescence and second harmonic generation microscopy provided high-resolution images of cells and collagen structure; those images allowed us to distinguish live from dead chondrocytes by visual assessment or by the normalized autofluorescence ratio. However, both methods require human involvement and have low throughputs. Methods for automated cell-based image processing can improve throughput. Conventional image processing algorithms do not perform well on autofluorescence images acquired by nonlinear microscopes due to low image contrast. In this study, we compared conventional, machine learning, and deep learning methods in chondrocyte segmentation and classification. We demonstrated that deep learning significantly improved the outcome of chondrocyte segmentation and classification. With appropriate training, the deep learning method can achieve 90% accuracy in chondrocyte viability measurement. The significance of this work is that automated image analysis is feasible and should not become a major hurdle for the use of nonlinear optical imaging methods in biological or clinical studies.
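    The normalized autofluorescence ratio mentioned above is, at its core, a per-cell ratio of intensities from two channels. A minimal illustrative version is sketched below; the channel pairing and decision cutoff are assumptions, not the authors' calibrated values.

    ```python
    # Illustrative per-cell normalized autofluorescence ratio.
    import numpy as np

    def autofluorescence_ratio(channel_a, channel_b, cell_mask):
        """Mean intensity of channel A within one segmented cell,
        normalized by the combined mean intensity of both channels."""
        a = channel_a[cell_mask].mean()
        b = channel_b[cell_mask].mean()
        return a / (a + b)  # falls in [0, 1]

    # Hypothetical usage: classify a cell as live above a chosen cutoff.
    # ratio = autofluorescence_ratio(ch_a, ch_b, mask)
    # label = "live" if ratio > 0.5 else "dead"
    ```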

  5. Background

    Magnetic resonance imaging (MRI) scans are known to suffer from a variety of acquisition artifacts as well as equipment‐based variations that impact image appearance and segmentation performance. It is still unclear whether a direct relationship exists between magnetic resonance (MR) image quality metrics (IQMs) (e.g., signal‐to‐noise, contrast‐to‐noise) and segmentation accuracy.

    Purpose

    Deep learning (DL) approaches have shown significant promise for automated segmentation of brain tumors on MRI but depend on the quality of input training images. We sought to evaluate the relationship between IQMs of input training images and DL‐based brain tumor segmentation accuracy toward developing more generalizable models for multi‐institutional data.

    Methods

    We trained a 3D DenseNet model on the BraTS 2020 cohorts for segmentation of tumor subregions (enhancing tumor (ET), peritumoral edematous tissue, and necrotic/non-enhancing tumor) on MRI, with performance quantified via a 5‐fold cross‐validated Dice coefficient. MRI scans were evaluated with the open‐source quality control tool MRQy to yield 13 IQMs per scan. The Pearson correlation coefficient was computed between whole tumor (WT) Dice values and IQM measures in the training cohorts to identify the quality measures most correlated with segmentation performance. Each selected IQM was used to group MRI scans as “better” quality (BQ) or “worse” quality (WQ) via relative thresholding. Segmentation performance was re‐evaluated for the DenseNet model when (i) training on BQ MRI images and validating on WQ images, as well as (ii) training on WQ images and validating on BQ images. Trends were further validated on independent test sets derived from the BraTS 2021 training cohorts.
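    The correlation-and-grouping analysis described above can be sketched in a few lines. Here the “relative thresholding” is assumed to be a median split, which the abstract does not specify.

    ```python
    # Correlate per-scan image quality metrics (IQMs) with Dice scores, then
    # split scans into "better"/"worse" quality groups by a median threshold
    # (the median split is an assumption; the paper says "relative thresholding").
    import numpy as np
    from scipy.stats import pearsonr

    def iqm_dice_analysis(iqm, dice):
        iqm = np.asarray(iqm)
        r, p = pearsonr(iqm, dice)   # association between quality and accuracy
        cutoff = np.median(iqm)      # relative threshold
        better = iqm >= cutoff       # mask of "BQ" scans
        worse = ~better              # mask of "WQ" scans
        return r, p, better, worse
    ```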

    Results

    For this study, multimodal MRI scans from the BraTS 2020 training cohorts were used to train the segmentation model, which was then validated on independent test sets derived from the BraTS 2021 cohort. Among the selected IQMs, models trained on BQ images based on inhomogeneity measurements (coefficient of variance, coefficient of joint variation, coefficient of variation of the foreground patch) and models trained on WQ images based on the noise measurement peak signal‐to‐noise ratio (PSNR) yielded significantly improved tumor segmentation accuracy compared to their inverse models.

    Conclusions

    Our results suggest that a significant correlation may exist between specific MR IQMs and DenseNet‐based brain tumor segmentation performance. Selecting MRI scans for model training based on IQMs may yield more accurate and generalizable models on unseen validation data.
