
Title: Prevalence and practices of immunofluorescent cell image processing: a systematic review
Background

We performed a systematic review that identified at least 9,000 scientific papers on PubMed that include immunofluorescent images of cells from the central nervous system (CNS). These CNS papers contain tens of thousands of immunofluorescent neural images supporting the findings of over 50,000 associated researchers. While many existing reviews discuss different aspects of immunofluorescent microscopy, such as image acquisition and staining protocols, few discuss immunofluorescent imaging from an image-processing perspective. We analyzed the literature to determine which image processing methods were commonly reported alongside the associated CNS cell type, microscopy technique, and animal model, and to highlight gaps in image processing documentation and reporting in the CNS research field.

Methods

We completed a comprehensive search of PubMed publications using Medical Subject Headings (MeSH) terms and other general search terms for CNS cells and common fluorescent microscopy techniques. Publications were retrieved from PubMed using combinations of these row and column description terms. We then manually tagged the metadata of each publication, exported as a comma-separated values (CSV) file, with the following categories: animal or cell model, quantified features, threshold techniques, segmentation techniques, and image processing software.
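As an illustration only, a search-and-tagging workflow of this kind can be scripted. The sketch below assumes Biopython's Entrez interface and pandas; the query terms, contact email, file name, and column names are placeholders rather than the review's actual search matrix or protocol.

```python
# Illustrative sketch: retrieve PubMed hits for one row/column term pairing and
# prepare a CSV with empty columns for manual tagging. Query, email, retmax,
# file name, and column names are assumptions, not the review's actual protocol.
import pandas as pd
from Bio import Entrez  # Biopython

Entrez.email = "you@example.org"  # NCBI requires a contact address (placeholder)

# One example pairing of a CNS cell term (row) with a microscopy term (column).
query = '"astrocytes"[MeSH Terms] AND "microscopy, confocal"[MeSH Terms]'
record = Entrez.read(Entrez.esearch(db="pubmed", term=query, retmax=500))

# Build a CSV of hits with empty tagging columns to fill in during full-text review.
df = pd.DataFrame({"pmid": record["IdList"]})
for col in ["animal_or_cell_model", "quantified_features", "threshold_technique",
            "segmentation_technique", "image_processing_software"]:
    df[col] = ""
df.to_csv("cns_immunofluorescence_papers.csv", index=False)
```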

Results

Of the almost 9,000 immunofluorescent imaging papers identified in our search, only 856 explicitly include image processing information. Moreover, hundreds of these 856 papers are missing the thresholding, segmentation, and morphological feature details necessary for explainable, unbiased, and reproducible results. In our assessment of the literature, we visualized current image processing practices, compiled the image processing options offered by the top twelve software programs, and designed a road map to enhance image processing. We determined that thresholding and segmentation methods were often omitted from publications and remain underreported or underutilized in quantitative CNS cell research.
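The level of detail the review finds missing can typically be captured in a few lines of analysis code. The sketch below is a minimal scikit-image example, not a recommended protocol: it simply shows the kind of threshold method, object-size filter, and measured feature list that would need to be stated explicitly; the file name and parameter values are illustrative.

```python
# Minimal, explicitly parameterized thresholding/segmentation example (scikit-image).
# The input file, size filter, and feature list are illustrative placeholders.
import pandas as pd
from skimage import filters, io, measure, morphology

img = io.imread("gfap_channel.tif")                    # single-channel immunofluorescent image
thresh = filters.threshold_otsu(img)                   # report the method (Otsu) and the value used
binary = img > thresh
binary = morphology.remove_small_objects(binary, min_size=50)  # report the object-size filter

labels = measure.label(binary)                         # connected-component segmentation
props = measure.regionprops_table(
    labels, intensity_image=img,
    properties=("label", "area", "perimeter", "eccentricity", "mean_intensity"),
)
features = pd.DataFrame(props)                         # one row of morphological features per object
print(f"Otsu threshold = {thresh:.1f}; {labels.max()} objects detected")
```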

Discussion

Fewer than 10% of papers with immunofluorescent images include image processing in their methods. A few authors are implementing advanced image analysis methods to quantify over 40 different CNS cell features, which can provide quantitative insights that advance CNS research. However, our review argues that image analysis will remain limited in rigor and reproducibility without more detailed and consistent reporting of image processing methods.

Conclusion

Image processing is a critical part of CNS research that must be improved to increase scientific insight, explainability, reproducibility, and rigor.

 
Award ID(s): 1934292
NSF-PAR ID: 10482705
Publisher / Repository: PubMed
Journal Name: Frontiers in Cellular Neuroscience
Volume: 17
ISSN: 1662-5102
Sponsoring Org: National Science Foundation
More Like this
  1. Introduction

    Traction force microscopy (TFM) is a widely used technique to measure cell contractility on compliant substrates that mimic the stiffness of human tissues. For every step in a TFM workflow, users make choices that impact the quantitative results, yet the rationales for and consequences of these decisions are often unclear. We have found few papers that show the complete experimental and mathematical steps of TFM, which obscures the full effects of these decisions on the final output.

    Methods

    Therefore, we present this “Field Guide” with the goal of explaining the mathematical basis of common TFM methods to practitioners in an accessible way. We specifically focus on how errors propagate in TFM workflows given specific experimental design and analytical choices.

    Results

    We cover important assumptions and considerations in TFM substrate manufacturing, substrate mechanical properties, imaging techniques, image processing methods, approaches and parameters used in calculating traction stress, and data-reporting strategies.

    Conclusions

    By presenting a conceptual review and analysis of TFM-focused research articles published over the last two decades, we provide researchers in the field with a better understanding of their options to make more informed choices when creating TFM workflows depending on the type of cell being studied. With this review, we aim to empower experimentalists to quantify cell contractility with confidence.
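To make the traction-calculation step referenced in this entry concrete, here is a minimal, unregularized Fourier-transform traction cytometry (FTTC) sketch in NumPy. It assumes a linear elastic half-space (Boussinesq Green's function) and a regular displacement grid; the Young's modulus, Poisson's ratio, and grid spacing are placeholder values, and it is not the Field Guide's prescribed workflow, which would also address regularization, noise, and the other choices discussed above.

```python
# Minimal, unregularized FTTC sketch: recover tractions from a substrate displacement
# field sampled on a regular grid. Assumes a linear elastic half-space (Boussinesq
# Green's function); E, nu, grid spacing, and the displacement inputs are placeholders.
import numpy as np

def fttc(ux, uy, E=10e3, nu=0.5, dx=1e-6):
    """ux, uy: 2D displacement components (m) on a square grid with spacing dx (m)."""
    ny, nx = ux.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    K = np.hypot(KX, KY)
    K[0, 0] = 1.0  # avoid division by zero; the zero mode is discarded below

    Ux, Uy = np.fft.fft2(ux), np.fft.fft2(uy)

    # Fourier-space Boussinesq Green's function (standard FTTC form).
    pref = 2 * (1 + nu) / (E * K**3)
    Gxx = pref * ((1 - nu) * K**2 + nu * KY**2)
    Gyy = pref * ((1 - nu) * K**2 + nu * KX**2)
    Gxy = -pref * nu * KX * KY

    # Invert the 2x2 Green's matrix at each wavevector: T(k) = G(k)^-1 U(k).
    det = Gxx * Gyy - Gxy**2
    Tx = (Gyy * Ux - Gxy * Uy) / det
    Ty = (-Gxy * Ux + Gxx * Uy) / det
    Tx[0, 0] = Ty[0, 0] = 0.0  # the mean traction is not recoverable

    tx, ty = np.fft.ifft2(Tx).real, np.fft.ifft2(Ty).real
    return tx, ty  # traction components in Pa
```

Note that tractions scale linearly with E in this formulation, so any uncertainty in the substrate's Young's modulus propagates directly into the traction magnitudes, one example of the error propagation the Field Guide examines.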

     
  2. Measuring the organization of the cellular cytoskeleton and the surrounding extracellular matrix (ECM) is currently of wide interest, as changes in both local and global alignment can highlight alterations in cellular function and in the material properties of the extracellular environment. Different approaches have been developed to quantify these structures, typically based on fiber segmentation or on matrix representation and transformation of the image, each with its own advantages and disadvantages. Here we present AFT (Alignment by Fourier Transform), a workflow to quantify the alignment of fibrillar features in microscopy images using 2D Fast Fourier Transforms (FFT). Using pre-existing datasets of cell and ECM images, we demonstrate our approach and compare this workflow with two other well-known ImageJ algorithms for quantifying image feature alignment. These comparisons reveal that AFT has a number of advantages owing to its grid-based FFT approach: (1) flexibility in defining the window and neighborhood sizes allows a parameter search to determine the optimal length scale at which to compute alignment metrics, so the approach easily accommodates different image resolutions and biological systems; (2) the length scale of decay in alignment can be extracted by comparing neighborhood sizes, revealing the overall distance over which features remain anisotropic; (3) the approach is agnostic to the signal source, making it applicable to a wide range of imaging modalities, and depends on fewer input parameters than segmentation methods; and (4) compared to segmentation methods, the algorithm is computationally inexpensive, as high-resolution images can be evaluated in less than a second on a standard desktop computer, making it feasible to screen numerous experimental perturbations or examine large images over long length scales. Implementation is available in both MATLAB and Python for wider accessibility, with example datasets for single images and batch processing. Additionally, we include an approach to automatically search for optimal window and neighborhood sizes, as well as to measure the decay in alignment over progressively increasing length scales.
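As a rough illustration of the grid-based FFT idea (not the AFT implementation itself, which the authors provide in MATLAB and Python), the sketch below estimates a dominant orientation per image window from its 2D power spectrum and scores neighborhood alignment with a nematic order parameter; the windowing function and order-parameter definition are illustrative choices.

```python
# Sketch of FFT-based local orientation and alignment scoring, in the spirit of a
# grid-based approach. Not the AFT code; window handling is simplified.
import numpy as np

def local_orientation(window):
    """Dominant feature angle (radians) of one image window from its 2D power spectrum."""
    win = window * np.outer(np.hanning(window.shape[0]), np.hanning(window.shape[1]))
    power = np.abs(np.fft.fftshift(np.fft.fft2(win)))**2
    ny, nx = power.shape
    ky, kx = np.mgrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
    power[ny // 2, nx // 2] = 0.0  # drop the DC component
    # Second moments of the spectrum; fibers at angle t concentrate spectral power
    # along the perpendicular direction, hence the +90 degree shift at the end.
    mxx, myy, mxy = (np.sum(power * q) for q in (kx * kx, ky * ky, kx * ky))
    angle_spec = 0.5 * np.arctan2(2 * mxy, mxx - myy)
    return angle_spec + np.pi / 2

def order_parameter(angles):
    """Nematic order parameter of a neighborhood of window angles (1 = aligned, 0 = random)."""
    return np.hypot(np.mean(np.cos(2 * angles)), np.mean(np.sin(2 * angles)))
```

Tiling an image into windows, computing one angle per window, and averaging the order parameter over neighborhoods of increasing size yields the kind of decay-in-alignment curve described in the entry above.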
  3. Background

    In recent years, 3-dimensional (3D) spheroid models have become increasingly popular in scientific research because they provide a more physiologically relevant microenvironment that mimics in vivo conditions. 3D spheroid assays have proven advantageous, offering a better understanding of cellular behavior, drug efficacy, and toxicity compared to traditional 2-dimensional cell culture methods. However, their use is impeded by the absence of automated and user-friendly tools for spheroid image analysis, which adversely affects the reproducibility and throughput of these assays.

    Results

    To address these issues, we have developed a fully automated, web-based tool called SpheroScan, which uses the Mask Region-based Convolutional Neural Network (Mask R-CNN) deep learning framework for image detection and segmentation. To develop a deep learning model applicable to spheroid images from a range of experimental conditions, we trained the model on spheroid images captured using the IncuCyte Live-Cell Analysis System and a conventional microscope. Performance evaluation of the trained model on validation and test datasets shows promising results.

    Conclusion

    SpheroScan allows for easy analysis of large numbers of images and provides interactive visualization features for a more in-depth understanding of the data. Our tool represents a significant advancement in the analysis of spheroid images and will facilitate the widespread adoption of 3D spheroid models in scientific research. The source code and a detailed tutorial for SpheroScan are available at https://github.com/FunctionalUrology/SpheroScan.
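For readers wanting a sense of what Mask R-CNN-based spheroid detection looks like in code, here is a minimal torchvision inference sketch. It is not the SpheroScan pipeline; the checkpoint path, image file, and score/mask thresholds are placeholders, and SpheroScan itself (linked above) provides the trained models and web interface.

```python
# Minimal inference sketch with a torchvision Mask R-CNN (not the SpheroScan pipeline).
# 'spheroid_maskrcnn.pth', the image file, and the thresholds are placeholders.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)  # background + spheroid
model.load_state_dict(torch.load("spheroid_maskrcnn.pth", map_location="cpu"))
model.eval()

img = to_tensor(Image.open("incucyte_well_A1.png").convert("RGB"))
with torch.no_grad():
    pred = model([img])[0]

keep = pred["scores"] > 0.7                # confidence cutoff (illustrative)
masks = pred["masks"][keep, 0] > 0.5       # binarize the soft instance masks
areas_px = masks.sum(dim=(1, 2))           # per-spheroid area in pixels
print(f"{keep.sum().item()} spheroids detected, areas (px): {areas_px.tolist()}")
```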

     
  4. Motivation

    Morphological analyses with flatmount fluorescent images are essential to retinal pigment epithelial (RPE) aging studies and thus require accurate RPE cell segmentation. Although rapid technological advances in deep learning semantic segmentation have achieved great success in many areas of biomedical research, the performance of these supervised learning methods for RPE cell segmentation is still limited by inadequate training data with high-quality annotations.

    Results

    To address this problem, we develop a Self-Supervised Semantic Segmentation (S4) method that utilizes a self-supervised learning strategy to train a semantic segmentation network with an encoder–decoder architecture. We employ a reconstruction loss and a pairwise representation loss to make the encoder extract structural information, while we create a morphology loss to produce the segmentation map. In addition, we develop a novel image augmentation algorithm (AugCut) to produce multiple views for self-supervised learning and enhance network training performance. To validate the efficacy of our method, we applied S4 to RPE cell segmentation in a large set of flatmount fluorescent microscopy images and compared it with other state-of-the-art deep learning approaches. Our method demonstrates better performance in both qualitative and quantitative evaluations, suggesting its promising potential to support large-scale cell morphological analyses in RPE aging investigations.

    Availability and implementation

    The codes and the documentation are available at: https://github.com/jkonglab/S4_RPE.
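As a schematic illustration of the combined-objective idea described above (not the S4 code, which is available from the repository linked above), the sketch below trains a tiny encoder-decoder with a reconstruction loss plus a segmentation loss in PyTorch. The network, loss weights, and random stand-in data are assumptions, and the pairwise representation loss and AugCut augmentation are omitted.

```python
# Schematic sketch: encoder-decoder trained with reconstruction + segmentation losses,
# loosely in the spirit of the combined objectives described above. NOT the S4
# implementation; network, weights, and the random stand-in data are placeholders.
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.recon_head = nn.Conv2d(32, 1, 1)   # reconstructs the input image
        self.seg_head = nn.Conv2d(32, 1, 1)     # predicts a cell-boundary map

    def forward(self, x):
        z = self.encoder(x)
        return self.recon_head(z), self.seg_head(z)

model = TinyEncoderDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
recon_loss, seg_loss = nn.MSELoss(), nn.BCEWithLogitsLoss()

# One illustrative training step on random tensors standing in for flatmount images
# and (pseudo-)labels; the 0.5 loss weight is an arbitrary assumption.
img = torch.rand(4, 1, 128, 128)
pseudo_mask = (torch.rand(4, 1, 128, 128) > 0.5).float()
recon, seg_logits = model(img)
loss = recon_loss(recon, img) + 0.5 * seg_loss(seg_logits, pseudo_mask)
opt.zero_grad()
loss.backward()
opt.step()
```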

     
  5. Images document scientific discoveries and are prevalent in modern biomedical research. Microscopy imaging in particular is currently undergoing rapid technological advancements. However, for scientists wishing to publish obtained images and image-analysis results, there are currently no unified guidelines for best practices. Consequently, microscopy images and image data in publications may be unclear or difficult to interpret. Here, we present community-developed checklists for preparing light microscopy images and describing image analyses for publications. These checklists offer authors, readers and publishers key recommendations for image formatting and annotation, color selection, data availability and reporting image-analysis workflows. The goal of our guidelines is to increase the clarity and reproducibility of image figures and thereby to heighten the quality and explanatory power of microscopy data. 