Title: Image3C, a multimodal image-based and label independent integrative method for single-cell analysis
Image-based cell classification has become a common tool to identify phenotypic changes in cell populations. However, this methodology is limited to organisms possessing well characterized species-specific reagents (e.g., antibodies) that allow cell identification, clustering and convolutional neural network (CNN) training. In the absence of such reagents, the power of image-based classification has remained mostly off-limits to many research organisms. We have developed an image-based classification methodology we named Image3C (Image-Cytometry Cell Classification) that does not require species-specific reagents or pre-existing knowledge about the sample. Image3C combines image-based flow cytometry with an unbiased, high-throughput cell clustering pipeline and CNN integration. Image3C exploits intrinsic cellular features and non-species-specific dyes to perform de novo cell composition analysis and to detect changes in cellular composition between different conditions. Therefore, Image3C expands the use of image-based analyses of cell population composition to research organisms in which detailed cellular phenotypes are unknown or for which species-specific reagents are not available.
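The abstract does not spell out implementation details, but the core idea (clustering cells de novo from intrinsic, label-free features and then comparing composition across conditions) can be illustrated with a minimal, hypothetical sketch. The feature names, the choice of k-means, and the cluster count below are assumptions for illustration, not Image3C's actual pipeline.

```python
# Hypothetical illustration of label-free, de novo cell clustering; not Image3C's pipeline.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder for intrinsic, label-free features exported per cell from an imaging
# flow cytometer (e.g., area, aspect ratio, brightfield texture, DNA-dye intensity).
features = rng.normal(size=(5000, 8))

# Standardize features, then partition cells into clusters without any species-specific labels.
X = StandardScaler().fit_transform(features)
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

# Cell-composition readout: fraction of cells per cluster, comparable across conditions.
counts = np.bincount(clusters, minlength=10)
print(counts / counts.sum())
```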
Award ID(s):
1923372
NSF-PAR ID:
10287425
Date Published:
Journal Name:
eLife
Volume:
10
ISSN:
2050-084X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The axon initial segment (AIS) is a highly regulated subcellular domain required for neuronal firing. Changes in the AIS protein composition and distribution are a form of structural plasticity, which powerfully regulates neuronal activity and may underlie several neuropsychiatric and neurodegenerative disorders. Despite its physiological and pathophysiological relevance, the signaling pathways mediating AIS protein distribution are still poorly studied. Here, we used confocal imaging and whole-cell patch clamp electrophysiology in primary hippocampal neurons to study how AIS protein composition and neuronal firing varied in response to selected kinase inhibitors targeting the AKT/GSK3 pathway, which has previously been shown to phosphorylate AIS proteins. Image-based features representing the cellular pattern distribution of the voltage-gated Na+ (Nav) channel, ankyrin G, βIV spectrin, and the cell-adhesion molecule neurofascin were analyzed, revealing βIV spectrin as the AIS protein most sensitive to AKT/GSK3 pathway inhibition. Within this pathway, inhibition of AKT by triciribine had the greatest effect on βIV spectrin localization to the AIS and its subcellular distribution within neurons, a phenotype that Support Vector Machine classification was able to accurately distinguish from control. Treatment with triciribine also resulted in increased excitability in primary hippocampal neurons. Thus, perturbations to signaling mechanisms within the AKT pathway contribute to changes in βIV spectrin distribution and neuronal firing that may be associated with neuropsychiatric and neurodegenerative disorders.
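    A minimal, hypothetical sketch of the kind of Support Vector Machine classification mentioned in this abstract: separating treated from control cells using image-derived feature vectors. The synthetic features, placeholder labels, and the RBF-kernel SVM with 5-fold cross-validation are illustrative assumptions, not the study's actual data or settings.

```python
# Hypothetical SVM classification of image-derived features; synthetic placeholder data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))      # e.g., per-cell AIS intensity/length/position features (assumed)
y = rng.integers(0, 2, size=200)    # 0 = control, 1 = treated (placeholder labels)

# Standardize, then classify with an RBF-kernel SVM; report cross-validated accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```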
  2. Abstract

    Insect populations are changing rapidly, and monitoring these changes is essential for understanding the causes and consequences of such shifts. However, large‐scale insect identification projects are time‐consuming and expensive when done solely by human identifiers. Machine learning offers a possible solution to help collect insect data quickly and efficiently.

    Here, we outline a methodology for training classification models to identify pitfall trap‐collected insects from image data and then apply the method to identify ground beetles (Carabidae). All beetles were collected by the National Ecological Observatory Network (NEON), a continental‐scale ecological monitoring project with sites across the United States. We describe the procedures for image collection, image data extraction, data preparation, and model training, and compare the performance of five machine learning algorithms and two classification methods (hierarchical vs. single‐level) for identifying ground beetles from the species to subfamily level. All models were trained using pre‐extracted feature vectors, not raw image data. Our methodology allows for data to be extracted from multiple individuals within the same image, thus enhancing time efficiency, utilizes relatively simple models that allow for direct assessment of model performance, and can be performed on relatively small datasets.

    The best performing algorithm, linear discriminant analysis (LDA), reached an accuracy of 84.6% at the species level when naively identifying species, which was further increased to >95% when classifications were limited by known local species pools. Model performance was negatively correlated with taxonomic specificity, with the LDA model reaching an accuracy of ~99% at the subfamily level. When classifying carabid species not included in the training dataset at higher taxonomic levels, the models performed significantly better than if classifications were made randomly. We also observed greater performance when classifications were made using the hierarchical classification method compared to the single‐level classification method at higher taxonomic levels.

    The general methodology outlined here serves as a proof‐of‐concept for classifying pitfall trap‐collected organisms using machine learning algorithms, and the image data extraction methodology may also be used for purposes other than machine learning. We propose that integration of machine learning in large‐scale identification pipelines will increase efficiency and lead to a greater flow of insect macroecological data, with the potential to be expanded for use with other non‐insect taxa.
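    A minimal sketch of the LDA-on-feature-vectors approach described in this abstract, assuming synthetic feature vectors and integer-encoded species labels; the "local species pool" restriction is implemented here as a simple masking of LDA posterior probabilities, which is an illustrative assumption rather than the authors' exact procedure.

```python
# Illustrative LDA classification on pre-extracted feature vectors (synthetic data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 30))        # pre-extracted image feature vectors (placeholder)
y = rng.integers(0, 20, size=1000)     # species labels encoded as integers (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

# Naive prediction over all classes.
naive_pred = lda.predict(X_te)

# Restricting predictions to a known local species pool: mask the posterior
# probability of species not recorded at the collection site, then re-argmax.
local_pool = {0, 3, 5, 7, 11}          # hypothetical site-level species list
proba = lda.predict_proba(X_te)
mask = np.array([c in local_pool for c in lda.classes_])
restricted_pred = lda.classes_[np.argmax(proba * mask, axis=1)]
```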

     
  3.
    Detection and quantification of bacterial endotoxins is important in a range of health-related contexts, including during pharmaceutical manufacturing of therapeutic proteins and vaccines. Here we combine experimental measurements based on nematic liquid crystalline droplets and machine learning methods to show that it is possible to classify bacterial sources (Escherichia coli, Pseudomonas aeruginosa, Salmonella minnesota) and quantify the concentration of endotoxin derived from all three bacterial species present in aqueous solution. The approach uses flow cytometry to quantify, in a high-throughput manner, changes in the internal ordering of micrometer-sized droplets of nematic 4-cyano-4′-pentylbiphenyl triggered by the endotoxins. The changes in internal ordering alter the intensities of light side-scattered (SSC, large-angle) and forward-scattered (FSC, small-angle) by the liquid crystal droplets. A convolutional neural network (EndoNet) is trained using the large data sets generated by flow cytometry and shown to predict endotoxin source and concentration directly from the FSC/SSC scatter plots. By using saliency maps, we reveal how EndoNet captures subtle differences in scatter fields to enable classification of bacterial source and quantification of endotoxin concentration over a range that spans eight orders of magnitude (0.01 pg/mL to 1 μg/mL). We attribute the dependence of the scatter fields on the bacterial origin of the endotoxin, as detected by EndoNet, to the distinct molecular structures of the lipid A domains of the endotoxins derived from the three bacteria. Overall, we conclude that the combination of liquid crystal droplets and EndoNet provides the basis of a promising analytical approach for endotoxins that does not require the use of complex biologically-derived reagents (e.g., Limulus amoebocyte lysate).
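    A hedged sketch of the general idea (a CNN that maps an FSC/SSC scatter histogram to a bacterial-source class and an endotoxin concentration); the layer sizes, input binning, and two-headed design below are assumptions for illustration and not the published EndoNet architecture.

```python
# Hypothetical two-headed CNN over FSC/SSC scatter histograms; not the published EndoNet.
import torch
import torch.nn as nn

class ScatterCNN(nn.Module):
    def __init__(self, n_sources=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head_source = nn.Linear(32 * 4 * 4, n_sources)  # bacterial source (classification)
        self.head_logconc = nn.Linear(32 * 4 * 4, 1)          # log10 concentration (regression)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head_source(h), self.head_logconc(h)

# Example input: a batch of 8 samples, each a 64x64 density histogram of FSC vs SSC (assumed binning).
x = torch.rand(8, 1, 64, 64)
logits, log_conc = ScatterCNN()(x)
print(logits.shape, log_conc.shape)   # torch.Size([8, 3]) torch.Size([8, 1])
```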
  4. Abstract

    High‐throughput single‐cell cytometry technologies have significantly improved our understanding of cellular phenotypes to support translational research and the clinical diagnosis of hematological and immunological diseases. However, subjective and ad hoc manual gating analysis does not adequately handle the increasing volume and heterogeneity of cytometry data for optimal diagnosis. Prior work has shown that machine learning can be applied to classify cytometry samples effectively. However, many of the machine learning classification results are either difficult to interpret without using characteristics of cell populations to make the classification, or suboptimal due to the use of inaccurate cell population characteristics derived from gating boundaries. To date, little has been done to optimize both the gating boundaries and the diagnostic accuracy simultaneously. In this work, we describe a fully discriminative machine learning approach that can simultaneously learn feature representations (e.g., combinations of coordinates of gating boundaries) and classifier parameters for optimizing clinical diagnosis from cytometry measurements. The approach starts from an initial gating position and then refines the position of the gating boundaries by gradient descent until a set of globally‐optimized gates across different samples is achieved. The learning procedure is constrained by regularization terms encoding domain knowledge that encourage the algorithm to seek interpretable results. We evaluate the proposed approach using both simulated and real data, producing classification results on par with those generated via human expertise, in terms of both the positions of the gating boundaries and the diagnostic accuracy. © 2019 The Authors. Cytometry Part A published by Wiley Periodicals, Inc. on behalf of International Society for Advancement of Cytometry.
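    A hedged sketch of the idea described above: jointly learning a gating boundary and a classifier by gradient descent, with a regularizer that keeps the gate near its initial position. The soft (sigmoid) rectangular gate, the single 2D gate, and the specific penalty below are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative joint optimization of a soft rectangular gate and a sample-level classifier.
import torch

def soft_gate_fraction(cells, lo, hi, sharpness=10.0):
    # cells: (n_cells, 2); lo/hi: learnable gate corners. The product of sigmoids gives a
    # differentiable "inside the gate" score per cell; its mean is the fraction of gated cells.
    inside = torch.sigmoid(sharpness * (cells - lo)) * torch.sigmoid(sharpness * (hi - cells))
    return inside.prod(dim=1).mean()

torch.manual_seed(0)
samples = [torch.randn(500, 2) for _ in range(20)]     # 20 synthetic cytometry samples
labels = torch.randint(0, 2, (20,)).float()            # placeholder diagnosis per sample

lo0, hi0 = torch.tensor([-1.0, -1.0]), torch.tensor([1.0, 1.0])    # initial gate position
lo, hi = lo0.clone().requires_grad_(True), hi0.clone().requires_grad_(True)
w, b = torch.zeros(1, requires_grad=True), torch.zeros(1, requires_grad=True)

opt = torch.optim.Adam([lo, hi, w, b], lr=0.05)
for step in range(200):
    fracs = torch.stack([soft_gate_fraction(s, lo, hi) for s in samples])
    logits = w * fracs + b                             # classify samples from the gated fraction
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)
    loss = loss + 0.1 * ((lo - lo0) ** 2 + (hi - hi0) ** 2).sum()  # stay near the initial gate
    opt.zero_grad()
    loss.backward()
    opt.step()
```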

     
  5. Abstract Background

    In recent years, 3-dimensional (3D) spheroid models have become increasingly popular in scientific research as they provide a more physiologically relevant microenvironment that mimics in vivo conditions. The use of 3D spheroid assays has proven advantageous as it offers a better understanding of cellular behavior, drug efficacy, and toxicity compared to traditional 2-dimensional cell culture methods. However, the use of 3D spheroid assays is impeded by the absence of automated and user-friendly tools for spheroid image analysis, which adversely affects the reproducibility and throughput of these assays.

    Results

    To address these issues, we have developed a fully automated, web-based tool called SpheroScan, which uses the Mask Region-based Convolutional Neural Network (Mask R-CNN) deep learning framework for image detection and segmentation. To develop a deep learning model that could be applied to spheroid images from a range of experimental conditions, we trained the model using spheroid images captured with the IncuCyte Live-Cell Analysis System and a conventional microscope. Performance evaluation of the trained model using validation and test datasets shows promising results.

    Conclusion

    SpheroScan allows for easy analysis of large numbers of images and provides interactive visualization features for a more in-depth understanding of the data. Our tool represents a significant advancement in the analysis of spheroid images and will facilitate the widespread adoption of 3D spheroid models in scientific research. The source code and a detailed tutorial for SpheroScan are available at https://github.com/FunctionalUrology/SpheroScan.
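    A hedged sketch of Mask R-CNN inference of the kind SpheroScan builds on, using torchvision's generic COCO-pretrained weights rather than SpheroScan's spheroid-trained model; the confidence threshold and the per-object area readout are illustrative assumptions.

```python
# Generic Mask R-CNN instance segmentation with torchvision; not SpheroScan's trained model.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

# Placeholder for a microscope/IncuCyte image, normalized to [0, 1], shape (3, H, W).
image = torch.rand(3, 512, 512)

with torch.no_grad():
    pred = model([image])[0]          # dict with 'boxes', 'labels', 'scores', 'masks'

keep = pred["scores"] > 0.5           # keep confident detections (assumed threshold)
masks = pred["masks"][keep] > 0.5     # binary instance masks, one per detected object
areas = masks.flatten(1).sum(dim=1)   # per-object pixel area (e.g., a spheroid-size readout)
print(len(areas), "objects detected")
```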

     