Objective and Impact Statement. We present a fully automated hematological analysis framework based on single-channel (single-wavelength), label-free deep-ultraviolet (UV) microscopy that serves as a fast, cost-effective alternative to conventional hematology analyzers. Introduction. Hematological analysis is essential for the diagnosis and monitoring of several diseases but requires complex systems operated by trained personnel, costly chemical reagents, and lengthy protocols. Label-free techniques eliminate the need for staining or additional preprocessing and can lead to faster analysis and a simpler workflow. In this work, we leverage the unique capabilities of deep-UV microscopy as a label-free, molecular imaging technique to develop a deep learning-based pipeline that enables virtual staining, segmentation, classification, and counting of white blood cells (WBCs) in single-channel images of peripheral blood smears. Methods. We train independent deep networks to virtually stain and segment grayscale images of smears. The segmented images are then used to train a classifier to yield a quantitative five-part WBC differential. Results. Our virtual staining scheme accurately recapitulates the appearance of cells under conventional Giemsa staining, the gold standard in hematology. The trained cellular and nuclear segmentation networks achieve high accuracy, and the classifier can achieve a quantitative five-part differential on unseen test data. Conclusion. This proposed automated hematology analysis framework could greatly simplify and improve current complete blood count and blood smear analysis and lead to the development of a simple, fast, and low-cost, point-of-care hematology analyzer.
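To make the staged design of such a pipeline concrete, the sketch below lays out the control flow from a single-channel image to a five-part differential. The three model functions are hypothetical placeholders (stubs), not the networks described in the abstract; only the staging (virtual staining for visual review, cell segmentation, per-cell classification, class counting) follows the description above.

```python
"""Illustrative sketch of a staged label-free hematology pipeline.

The three "model" functions are hypothetical stubs standing in for trained
virtual-staining, segmentation, and WBC-classification networks, so that
the pipeline's control flow is runnable end to end.
"""
from collections import Counter

import numpy as np

WBC_CLASSES = ["neutrophil", "lymphocyte", "monocyte", "eosinophil", "basophil"]


def virtual_stain(gray_img: np.ndarray) -> np.ndarray:
    # Stub: a trained image-to-image network would map the single-channel
    # image to a Giemsa-like RGB rendering for visual review.
    return np.repeat(gray_img[..., None], 3, axis=-1)


def segment_cells(gray_img: np.ndarray) -> list[tuple[slice, slice]]:
    # Stub: a trained segmentation network would return one crop/mask per
    # detected white blood cell. Here we return a single dummy crop.
    h, w = gray_img.shape
    return [(slice(0, h // 2), slice(0, w // 2))]


def classify_cell(cell_crop: np.ndarray) -> str:
    # Stub: a trained classifier would assign one of the five WBC classes.
    return WBC_CLASSES[int(cell_crop.mean()) % len(WBC_CLASSES)]


def five_part_differential(gray_img: np.ndarray) -> dict[str, float]:
    """Run the staged pipeline and report per-class percentages."""
    stained = virtual_stain(gray_img)  # Giemsa-like rendering, for review only
    crops = [gray_img[sl] for sl in segment_cells(gray_img)]
    counts = Counter(classify_cell(c) for c in crops)
    total = sum(counts.values()) or 1
    return {cls: 100.0 * counts[cls] / total for cls in WBC_CLASSES}


if __name__ == "__main__":
    demo = (np.random.rand(256, 256) * 255).astype(np.uint8)
    print(five_part_differential(demo))
```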
Deep learning provides high accuracy in automated chondrocyte viability assessment in articular cartilage using nonlinear optical microscopy
Chondrocyte viability is a crucial factor in evaluating cartilage health. Most cell viability assays rely on dyes and are not applicable for in vivo or longitudinal studies. We previously demonstrated that two-photon excited autofluorescence and second harmonic generation microscopy provided high-resolution images of cells and collagen structure; those images allowed us to distinguish live from dead chondrocytes by visual assessment or by the normalized autofluorescence ratio. However, both methods require human involvement and have low throughput. Automated cell-based image processing can improve throughput. Conventional image processing algorithms do not perform well on autofluorescence images acquired by nonlinear microscopes due to low image contrast. In this study, we compared conventional, machine learning, and deep learning methods for chondrocyte segmentation and classification. We demonstrated that deep learning significantly improved the outcome of chondrocyte segmentation and classification. With appropriate training, the deep learning method can achieve 90% accuracy in chondrocyte viability measurement. The significance of this work is that automated image analysis is feasible and should not be a major hurdle to the use of nonlinear optical imaging methods in biological or clinical studies.
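The abstract refers to a normalized autofluorescence ratio for separating live from dead chondrocytes. The sketch below shows one plausible way such a per-cell ratio could be computed from two autofluorescence channels and a label mask; the specific ratio definition (a / (a + b)) and the 0.5 cutoff are illustrative assumptions, not the paper's actual formula.

```python
"""Hedged sketch: per-cell normalized autofluorescence ratio.

The exact channels, normalization, and live/dead threshold used in the
study are not specified here; the two-channel ratio and the 0.5 cutoff
below are illustrative assumptions only.
"""
import numpy as np


def cell_viability_by_ratio(chan_a, chan_b, labels, threshold=0.5):
    """Return {cell_id: (ratio, "live"/"dead")} for each labeled cell.

    chan_a, chan_b : 2-D float arrays, two autofluorescence channels
    labels         : 2-D int array, 0 = background, >0 = cell IDs
    """
    results = {}
    for cell_id in np.unique(labels):
        if cell_id == 0:
            continue
        mask = labels == cell_id
        a = chan_a[mask].mean()
        b = chan_b[mask].mean()
        ratio = a / (a + b + 1e-9)  # normalized to [0, 1]
        results[int(cell_id)] = (ratio, "live" if ratio > threshold else "dead")
    return results


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((64, 64))
    b = rng.random((64, 64))
    labels = np.zeros((64, 64), dtype=int)
    labels[10:20, 10:20] = 1   # synthetic cell 1
    labels[40:55, 30:45] = 2   # synthetic cell 2
    print(cell_viability_by_ratio(a, b, labels))
```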
- Award ID(s):
- 1655740
- PAR ID:
- 10222103
- Publisher / Repository:
- Optical Society of America
- Date Published:
- Journal Name:
- Biomedical Optics Express
- Volume:
- 12
- Issue:
- 5
- ISSN:
- 2156-7085
- Format(s):
- Medium: X
- Size(s):
- Article No. 2759
- Sponsoring Org:
- National Science Foundation
More Like this
Background. We performed a systematic review that identified at least 9,000 scientific papers on PubMed that include immunofluorescent images of cells from the central nervous system (CNS). These CNS papers contain tens of thousands of immunofluorescent neural images supporting the findings of over 50,000 associated researchers. While many existing reviews discuss different aspects of immunofluorescent microscopy, such as image acquisition and staining protocols, few papers discuss immunofluorescent imaging from an image-processing perspective. We analyzed the literature to determine the image processing methods that were commonly published alongside the associated CNS cell, microscopy technique, and animal model, and highlight gaps in image processing documentation and reporting in the CNS research field. Methods. We completed a comprehensive search of PubMed publications using Medical Subject Headings (MeSH) terms and other general search terms for CNS cells and common fluorescent microscopy techniques. Publications were found on PubMed using a combination of column description terms and row description terms. We manually tagged the comma-separated values (CSV) metadata of each publication with the following categories: animal or cell model, quantified features, threshold techniques, segmentation techniques, and image processing software. Results. Of the almost 9,000 immunofluorescent imaging papers identified in our search, only 856 explicitly include image processing information. Moreover, hundreds of these 856 papers are missing the thresholding, segmentation, and morphological feature details necessary for explainable, unbiased, and reproducible results. In our assessment of the literature, we visualized current image processing practices, compiled the image processing options from the top twelve software programs, and designed a road map to enhance image processing. We determined that thresholding and segmentation methods were often omitted from publications and were underreported or underutilized in quantitative CNS cell research. Discussion. Less than 10% of papers with immunofluorescent images include image processing in their methods. A few authors are implementing advanced image analysis methods to quantify over 40 different CNS cell features, which can provide quantitative insights into CNS cell features that will advance CNS research. However, our review puts forward that image analysis methods will remain limited in rigor and reproducibility without more rigorous and detailed reporting of image processing methods. Conclusion. Image processing is a critical part of CNS research that must be improved to increase scientific insight, explainability, reproducibility, and rigor.
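As a concrete example of the kind of thresholding and segmentation reporting this review calls for, here is a minimal scikit-image pipeline in which every processing choice (threshold method, minimum object size, labeling scheme, measured features) is stated explicitly. The specific choices are illustrative assumptions, not drawn from any of the reviewed papers.

```python
"""Minimal, explicitly documented thresholding/segmentation pipeline
(scikit-image): Otsu global threshold -> small-object removal ->
connected-component labeling -> per-object morphological features.
"""
import numpy as np
from skimage import filters, measure, morphology


def segment_and_measure(image: np.ndarray, min_area: int = 50):
    # 1. Thresholding: global Otsu (report the method and any offsets used).
    thresh = filters.threshold_otsu(image)
    binary = image > thresh

    # 2. Clean-up: remove objects smaller than `min_area` pixels (report this value).
    binary = morphology.remove_small_objects(binary, min_size=min_area)

    # 3. Segmentation: connected-component labeling.
    labels = measure.label(binary)

    # 4. Morphological features per labeled object.
    features = [
        {"label": r.label, "area": r.area, "eccentricity": r.eccentricity}
        for r in measure.regionprops(labels)
    ]
    return labels, features


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((128, 128))
    img[30:60, 30:60] += 1.0  # bright synthetic "cell"
    labels, feats = segment_and_measure(img)
    print(feats)
```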
Accurately assessing cell viability and morphological properties within 3D bioprinted hydrogel scaffolds is essential for tissue engineering but remains challenging due to the limitations of existing invasive and threshold-based methods. We present a computational toolbox that automates cell viability analysis and quantifies key properties such as elongation, flatness, and surface roughness. This framework integrates optical coherence tomography (OCT) with deep learning-based segmentation, achieving a mean segmentation precision of 88.96%. By coupling OCT's high-resolution imaging with deep learning-based segmentation, this approach enables non-invasive, quantitative analysis, which can advance rapid monitoring of 3D cell cultures for regenerative medicine and biomaterial research.
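A brief sketch of how metrics of this kind might be computed from binary or labeled masks: voxel-wise precision (TP / (TP + FP)) and a simple elongation measure (major/minor axis ratio per region). These definitions are common conventions assumed here for illustration; the paper's exact formulations may differ.

```python
"""Illustrative metrics for mask-based evaluation: voxel-wise precision and
a simple per-region elongation measure. Definitions are assumptions:
precision = TP / (TP + FP); elongation = major / minor axis length.
"""
import numpy as np
from skimage import measure


def precision(pred: np.ndarray, truth: np.ndarray) -> float:
    """Voxel-wise precision of a binary prediction against ground truth."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tp / (tp + fp) if (tp + fp) else 0.0


def elongations(label_img: np.ndarray) -> dict[int, float]:
    """Major/minor axis ratio for each labeled region (2-D label image)."""
    out = {}
    for r in measure.regionprops(label_img):
        if r.minor_axis_length > 0:
            out[r.label] = r.major_axis_length / r.minor_axis_length
    return out


if __name__ == "__main__":
    truth = np.zeros((64, 64), dtype=bool)
    truth[20:40, 20:50] = True
    pred = np.zeros_like(truth)
    pred[22:42, 18:48] = True
    print("precision:", round(precision(pred, truth), 4))
    print("elongation:", elongations(measure.label(truth)))
```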
Segmentation of echocardiograms plays an essential role in the quantitative analysis of the heart and helps diagnose cardiac diseases. Over the past decade, deep learning-based approaches have significantly improved the performance of echocardiogram segmentation. Most deep learning-based methods assume that the image to be processed is rectangular. However, echocardiogram images are typically formed within a sector of a circle, leaving a significant region of the overall rectangular image with no data, a result of the ultrasound imaging methodology. This large non-imaging region can influence the training of deep neural networks. In this paper, we propose to use a polar transformation to help train deep learning algorithms. Using the r-θ transformation, a significant portion of the non-imaging background is removed, allowing the neural network to focus on the heart image. The segmentation model is trained on both x-y and r-θ images. During inference, the predictions from the x-y and r-θ images are combined using max-voting. We verify the efficacy of our method on the CAMUS dataset with a variety of segmentation networks, encoder networks, and loss functions. The experimental results demonstrate the effectiveness and versatility of the proposed method for improving segmentation results.
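A minimal sketch of the x-y / r-θ idea, assuming OpenCV (cv2) is available: warp the image to polar coordinates about an assumed sector apex, map an r-θ prediction back with the inverse warp, and fuse the two class-probability maps by a per-pixel maximum before the argmax. The apex location, output sizes, and exact voting rule are illustrative assumptions rather than the paper's implementation.

```python
"""Hedged sketch of polar-transform training/inference: x-y <-> r-theta
warps around an assumed sector apex, plus a simple max-voting fusion of
two class-probability maps. Apex, sizes, and voting rule are assumptions.
"""
import cv2
import numpy as np


def to_polar(img: np.ndarray, center: tuple[float, float], max_radius: float) -> np.ndarray:
    # Forward x-y -> r-theta warp about `center` (assumed sector apex).
    h, w = img.shape[:2]
    return cv2.warpPolar(img, (w, h), center, max_radius, cv2.WARP_POLAR_LINEAR)


def from_polar(polar_img: np.ndarray, center, max_radius, out_size) -> np.ndarray:
    # Inverse r-theta -> x-y warp, so polar predictions align with the original grid.
    return cv2.warpPolar(
        polar_img, out_size, center, max_radius,
        cv2.WARP_POLAR_LINEAR + cv2.WARP_INVERSE_MAP,
    )


def max_vote(prob_xy: np.ndarray, prob_rt_mapped: np.ndarray) -> np.ndarray:
    """Per-pixel max over two (H, W, num_classes) probability maps, then argmax.

    `prob_rt_mapped` is the r-theta prediction already mapped back to x-y.
    """
    fused = np.maximum(prob_xy, prob_rt_mapped)
    return np.argmax(fused, axis=-1)


if __name__ == "__main__":
    img = (np.random.rand(256, 256) * 255).astype(np.uint8)
    center = (128.0, 0.0)  # assumed apex at top-centre of the sector
    polar = to_polar(img, center, max_radius=256.0)
    back = from_polar(polar, center, 256.0, (256, 256))
    print(img.shape, polar.shape, back.shape)
```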
Morphological profiling is a valuable tool in phenotypic drug discovery. The advent of high-throughput automated imaging has enabled the capture of a wide range of morphological features of cells or organisms in response to perturbations at single-cell resolution. Concurrently, significant advances in machine learning and deep learning, especially in computer vision, have led to substantial improvements in analyzing large-scale high-content images at high throughput. These efforts have facilitated understanding of compound mechanisms of action, drug repurposing, and characterization of cell morphodynamics under perturbation, ultimately contributing to the development of novel therapeutics. In this review, we provide a comprehensive overview of recent advances in the field of morphological profiling. We summarize the image profiling analysis workflow, survey a broad spectrum of analysis strategies encompassing feature engineering– and deep learning–based approaches, and introduce publicly available benchmark datasets. We place a particular emphasis on the application of deep learning in this pipeline, covering cell segmentation, image representation learning, and multimodal learning. Additionally, we illuminate the application of morphological profiling in phenotypic drug discovery and highlight potential challenges and opportunities in this field.
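In the spirit of the feature-engineering branch of the workflow surveyed here, the sketch below extracts a few per-cell morphological features from a labeled segmentation and aggregates them into a single per-image profile vector. The feature set and the median aggregation are illustrative assumptions, not a specific published protocol.

```python
"""Sketch of a feature-engineering-style profiling step: per-cell features
from a labeled segmentation, aggregated into one per-image profile vector.
Feature choice and median aggregation are illustrative assumptions.
"""
import numpy as np
from skimage import measure

FEATURES = ("area", "perimeter", "eccentricity", "solidity")


def per_cell_features(label_img: np.ndarray) -> np.ndarray:
    """Return a (num_cells, num_features) matrix from a labeled segmentation."""
    rows = [
        [getattr(r, name) for name in FEATURES]
        for r in measure.regionprops(label_img)
    ]
    return np.asarray(rows, dtype=float)


def image_profile(label_img: np.ndarray) -> np.ndarray:
    """Aggregate single-cell features into one profile (median over cells)."""
    cells = per_cell_features(label_img)
    return np.median(cells, axis=0) if cells.size else np.zeros(len(FEATURES))


if __name__ == "__main__":
    labels = np.zeros((100, 100), dtype=int)
    labels[10:30, 10:30] = 1   # synthetic cell 1
    labels[50:80, 40:60] = 2   # synthetic cell 2
    print(dict(zip(FEATURES, image_profile(labels))))
```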