Across basic research studies, cell counting requires significant human time and expertise. Trained experts use thin focal plane scanning to count (click) cells in stained biological tissue. This computer-assisted process (optical disector) requires a well-trained human to select a unique best z-plane of focus for counting cells of interest. Though accurate, this approach typically requires an hour per case and is prone to inter- and intra-rater errors. Our group has previously proposed deep learning (DL)-based methods to automate these counts using cell segmentation at high magnification. Here we propose a novel You Only Look Once (YOLO) model that performs cell detection on multi-channel z-plane images (disector stack). This automated Multiple Input Multiple Output (MIMO) version of the optical disector method uses an entire z-stack of microscopy images as its input, and outputs cell detections (counts) with a bounding box for each cell and a class corresponding to the z-plane where the cell appears in best focus. Compared to the previous segmentation methods, the proposed method does not require time- and labor-intensive ground truth segmentation masks for training, while producing accuracy comparable to current segmentation-based automatic counts. The MIMO-YOLO method was evaluated on systematic-random samples of NeuN-stained tissue sections through the neocortex of mouse brains (n=7). Using a cross-validation scheme, this method showed the ability to correctly count total neuron numbers with accuracy close to human experts and with 100% repeatability (Test-Retest).
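The abstract describes a detector whose class label encodes the z-plane of best focus, so a total count reduces to tallying confident detections per plane. The sketch below illustrates that post-processing step; the tuple layout, field order, and threshold are illustrative assumptions, not the authors' actual output format.

```python
from collections import Counter

def count_cells(detections, score_thresh=0.5):
    """Tally MIMO-YOLO-style detections into a total cell count.

    Each detection is assumed to be (x, y, w, h, z_class, score),
    where z_class is the index of the z-plane in the disector stack
    where the cell is in best focus. This format is hypothetical.
    """
    kept = [d for d in detections if d[5] >= score_thresh]
    per_plane = Counter(d[4] for d in kept)  # counts keyed by best-focus plane
    return len(kept), dict(per_plane)

# Example: three confident detections across two focal planes
dets = [
    (10, 12, 8, 8, 2, 0.91),   # best focus in z-plane 2
    (40, 33, 9, 7, 2, 0.88),
    (22, 55, 8, 9, 5, 0.76),
    (60, 10, 7, 7, 1, 0.30),   # below threshold, ignored
]
total, by_plane = count_cells(dets)
```

Because each cell is assigned to exactly one best-focus plane, summing the per-plane tallies gives the total without double counting across the stack.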
IDCIA: Immunocytochemistry Dataset for Cellular Image Analysis
We present a new annotated microscopic cellular image dataset to improve the effectiveness of machine learning methods for cellular image analysis. Cell counting is an important step in cell analysis. Typically, domain experts manually count cells in a microscopic image. Automated cell counting can potentially eliminate this tedious, time-consuming process. However, a good labeled dataset is required for training an accurate machine learning model. Our dataset includes microscopic images of cells, and for each image, the cell count and the locations of individual cells. The data were collected as part of an ongoing study investigating the potential of electrical stimulation to modulate stem cell differentiation and possible applications for neural repair. Compared to existing publicly available datasets, our dataset has more images of cells stained with a greater variety of antibodies (protein components of immune responses against invaders) typically used for cell analysis. The experimental results on this dataset indicate that none of the five existing models under this study is able to achieve sufficiently accurate counts to replace manual methods. The dataset is available at https://figshare.com/articles/dataset/Dataset/21970604.
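Since the dataset pairs each image with both a cell count and individual cell locations, a basic sanity check when loading it is that the two agree. The snippet below sketches that check; the field names (`count`, `locations`) and record shape are assumptions for illustration — consult the dataset's own documentation on figshare for the real schema.

```python
def validate_annotation(record):
    """Check that a record's stated cell count matches its dot annotations.

    The record layout here (keys 'image', 'count', 'locations') is
    hypothetical; it stands in for whatever format the IDCIA files use.
    """
    locations = record["locations"]  # list of (x, y) cell centers
    assert all(len(p) == 2 for p in locations), "each location must be an (x, y) pair"
    return record["count"] == len(locations)

record = {
    "image": "img_001.tiff",
    "count": 3,
    "locations": [(12.5, 40.0), (88.2, 19.7), (55.0, 63.1)],
}
ok = validate_annotation(record)
```

Running such a check over every record before training catches annotation export errors early, which matters when the benchmark is an exact count.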
- Award ID(s): 2152117
- PAR ID: 10422833
- Date Published:
- Journal Name: MMSys '23: Proceedings of the 14th Conference on ACM Multimedia Systems
- Page Range / eLocation ID: 451 to 457
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract— Recent advances show the wide-ranging applications of machine learning (ML) for solving multi-disciplinary problems such as cancer cell growth detection and modeling cancer growths and treatments. There is growing interest among the faculty and students at Clayton State University in studying the applications of machine learning to medical imaging and proposing new algorithms, based on a recently funded NSF grant proposal covering medical imaging, skin cancer detection, and associated smartphone apps and a web-based user-friendly diagnosis interface. We tested many available open-source ML software packages in Python as applied to medical image data processing and to models used to predict cancer growths and treatments. We study the use of ML concepts that promote efficient, accurate, and secure computation over medical images, identifying and classifying cancer cells, and modeling cancer cell growth. In this collaborative project with another university, we follow a holistic approach to data analysis, leading to more efficient cancer detection based upon both cell analysis and image recognition. Here, we compare ML-based software methods and analyze their detection accuracy. In addition, we acquire publicly available cancer cell image data and analyze it using deep learning algorithms to detect benign and suspicious image samples. We apply current pattern-matching algorithms and study the available data with a view to possible diagnosis of cancer types.
Despite having widespread application in the biomedical sciences, flow cytometers have several limitations that prevent their application to point-of-care (POC) diagnostics in resource-limited environments. 3D printing provides a cost-effective approach to improve the accessibility of POC devices in resource-limited environments. Towards this goal, we introduce a 3D-printed imaging platform (3DPIP) capable of accurately counting particles and performing fluorescence microscopy. In our 3DPIP, captured microscopic images of particle flow are processed by custom-developed particle counter code to provide a particle count. This prototype uses a machine vision-based algorithm to identify particles from captured flow images and is flexible enough to allow for labeled and label-free particle counting. Additionally, the particle counter code returns particle coordinates with respect to time, which can further be used to perform particle image velocimetry. These results can help estimate forces acting on particles, and identify and sort different types of cells/particles. We evaluated the performance of this prototype by counting 10 μm polystyrene particles diluted in deionized water at different concentrations and comparing the results with a commercial Beckman-Coulter Z2 particle counter. The 3DPIP can count particle concentrations down to ∼100 particles per mL with a standard deviation of ±20 particles, which is comparable to the results obtained on a commercial particle counter. Our platform produces accurate results at flow rates up to 9 mL h⁻¹ for concentrations below 1000 particles per mL, while 5 mL h⁻¹ produces accurate results above this concentration limit. Aside from performing flow-through experiments, our instrument is capable of performing static experiments comparable to a plate reader. In this configuration, our instrument is able to count between 10 and 250 cells per image, depending on the prepared concentration of bacteria samples (Citrobacter freundii; ATCC 8090). Overall, this platform represents a first step towards the development of an affordable, fully 3D-printable imaging flow cytometry instrument for use in resource-limited clinical environments.
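The 3DPIP abstract describes a machine-vision pipeline that identifies particles in each captured frame. A minimal stand-in for that step is thresholding a grayscale frame and counting connected components, sketched below with a plain flood fill; the actual 3DPIP counter code is not published here, and the threshold and connectivity choice are assumptions.

```python
def count_particles(frame, thresh=128):
    """Count bright particles in a grayscale frame.

    Pixels >= thresh are treated as particle pixels; each 4-connected
    blob counts as one particle. This is a toy version of the kind of
    machine-vision step the abstract describes, not the 3DPIP code.
    """
    h, w = len(frame), len(frame[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if frame[y][x] >= thresh and not seen[y][x]:
                count += 1                     # new blob found
                stack = [(y, x)]               # flood-fill its pixels
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and not seen[cy][cx] and frame[cy][cx] >= thresh):
                        seen[cy][cx] = True
                        stack.extend([(cy + 1, cx), (cy - 1, cx),
                                      (cy, cx + 1), (cy, cx - 1)])
    return count

# Tiny synthetic frame with two bright blobs
frame = [
    [0, 200, 0, 0, 0],
    [0, 200, 0, 0, 180],
    [0, 0,   0, 0, 180],
    [0, 0,   0, 0, 0],
]
n = count_particles(frame)
```

Tracking each blob's centroid across successive frames, rather than only its count, would give the coordinates-versus-time data the abstract uses for particle image velocimetry.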
-
Stereology-based methods provide the current state-of-the-art approaches for accurate quantification of numbers and other morphometric parameters of biological objects in stained tissue sections. The advent of artificial intelligence (AI)-based deep learning (DL) offers the possibility of improving throughput by automating the collection of stereology data. We have recently shown that DL can effectively achieve accuracy comparable to manual stereology but with higher repeatability, improved throughput, and less variation due to human factors by quantifying the total number of immunostained cells at their maximal profile of focus in extended depth of field (EDF) images. In the first of two novel contributions in this work, we propose a semi-automatic approach using a handcrafted Adaptive Segmentation Algorithm (ASA) to automatically generate ground truth on EDF images for training our deep learning (DL) models to automatically count cells using unbiased stereology methods. This update increases the amount of training data, thereby improving the accuracy and efficiency of automatic cell counting methods, without a requirement for extra expert time. The second contribution of this work is a Multi-channel Input and Multi-channel Output (MIMO) method using a U-Net deep learning architecture for automatic cell counting in a stack of z-axis images (also known as disector stacks). This DL-based digital automation of the ordinary optical fractionator ensures accurate counts through spatial separation of stained cells in the z-plane, thereby avoiding the under-counting errors (false negatives) that arise in EDF images when overlapping cells mask one another, without the shortcomings of 3D and recurrent DL models. We demonstrate the practical applications of these advances with automatic disector-based estimates of the total number of NeuN-immunostained neurons in a mouse neocortex. In summary, this work provides the first demonstration of automatic estimation of a total cell number in tissue sections using a combination of deep learning and the disector-based optical fractionator method.
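The key claim above is that counting cells in the z-plane where each is in focus avoids the masking error of EDF projections, where cells stacked along the z-axis collapse into one profile. The toy comparison below makes that concrete; the per-plane centroid input and the merge radius are illustrative assumptions, not the paper's actual U-Net output.

```python
def disector_count(per_plane_centroids):
    """MIMO/disector-style total: each cell appears once, in its
    best-focus z-plane, so xy-overlapping cells stay distinct.
    Input format (list of per-plane (x, y) lists) is hypothetical."""
    return sum(len(plane) for plane in per_plane_centroids)

def edf_count(per_plane_centroids, min_sep=5.0):
    """Naive EDF-style total after projecting all planes onto one
    image: centroids closer than min_sep pixels collapse into a
    single profile, modeling the masking effect."""
    merged = []
    for plane in per_plane_centroids:
        for (x, y) in plane:
            if all((x - mx) ** 2 + (y - my) ** 2 >= min_sep ** 2
                   for (mx, my) in merged):
                merged.append((x, y))
    return len(merged)

stack_cells = [
    [(10.0, 10.0)],                  # plane 0: one cell
    [(11.0, 11.0), (50.0, 40.0)],    # plane 1: first cell overlaps plane 0's in xy
]
z_total = disector_count(stack_cells)   # overlap resolved by z-separation
flat_total = edf_count(stack_cells)     # overlapping pair masked into one
```

The two cells near (10, 10) and (11, 11) sit in different focal planes, so the disector-style count keeps both, while the flat projection merges them — the under-counting the abstract sets out to eliminate.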
The detection and segmentation of stained cells and nuclei are essential prerequisites for subsequent quantitative research on many diseases. Recently, deep learning has shown strong performance on many computer vision problems, including medical image analysis. Furthermore, accurate stereological quantification of microscopic structures in stained tissue sections plays a critical role in understanding human diseases and developing safe and effective treatments. In this article, we review the most recent deep learning approaches for cell (nuclei) detection and segmentation in cancer and Alzheimer's disease, with an emphasis on deep learning approaches combined with unbiased stereology. Major challenges include accurate and reproducible cell detection and segmentation in microscopic images of stained sections. Finally, we discuss potential improvements and future trends in deep learning applied to cell detection and segmentation.

