Label-free cell classification is advantageous for supplying pristine cells for further use or examination, yet existing techniques frequently fall short in terms of specificity and speed. In this study, we address these limitations through the development of a novel machine learning framework, Multiplex Image Machine Learning (MIML). This architecture uniquely combines label-free cell images with biomechanical property data, harnessing the vast, often underutilized biophysical information intrinsic to each cell. By integrating both types of data, our model offers a holistic understanding of cellular properties, utilizing cell biomechanical information typically discarded in traditional machine learning models. This approach has led to a remarkable 98.3% accuracy in cell classification, a substantial improvement over models that rely solely on image data. MIML has been proven effective in classifying white blood cells and tumor cells, with potential for broader application due to its inherent flexibility and transfer learning capability. It is particularly effective for cells with similar morphology but distinct biomechanical properties. This innovative approach has significant implications across various fields, from advancing disease diagnostics to understanding cellular behavior.
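The abstract does not spell out how the two modalities are combined, but the general idea of fusing a label-free image with scalar biomechanical measurements can be illustrated with a minimal PyTorch-style sketch. Everything below (layer sizes, the number of biomechanical features, the class count) is an illustrative assumption, not the published MIML architecture.

```python
# Hypothetical sketch of image + biomechanical-feature fusion; NOT the published MIML model.
import torch
import torch.nn as nn

class MultimodalCellClassifier(nn.Module):
    def __init__(self, n_biomech_features=4, n_classes=3):
        super().__init__()
        # Image branch: small CNN over a single-channel label-free image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                                  # -> (batch, 32)
        )
        # Biomechanical branch: MLP over scalar properties (e.g., deformability, size).
        self.mlp = nn.Sequential(nn.Linear(n_biomech_features, 16), nn.ReLU())
        # Classification head over the concatenated features.
        self.head = nn.Linear(32 + 16, n_classes)

    def forward(self, image, biomech):
        fused = torch.cat([self.cnn(image), self.mlp(biomech)], dim=1)
        return self.head(fused)

model = MultimodalCellClassifier()
logits = model(torch.randn(8, 1, 64, 64), torch.randn(8, 4))  # dummy batch
```

The key design choice such a fusion model captures is that the biomechanical measurements enter as a separate feature vector rather than being discarded, so cells with nearly identical morphology can still be separated by their mechanical signature.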
IDCIA: Immunocytochemistry Dataset for Cellular Image Analysis
We present a new annotated microscopic cellular image dataset to improve the effectiveness of machine learning methods for cellular image analysis. Cell counting is an important step in cell analysis. Typically, domain experts manually count cells in a microscopic image. Automated cell counting can potentially eliminate this tedious, time-consuming process. However, a good, labeled dataset is required to train an accurate machine learning model. Our dataset includes microscopic images of cells and, for each image, the cell count and the locations of individual cells. The data were collected as part of an ongoing study investigating the potential of electrical stimulation to modulate stem cell differentiation and possible applications for neural repair. Compared to existing publicly available datasets, our dataset contains more images of cells stained with a wider variety of antibodies (protein components of immune responses against invaders) typically used for cell analysis. Experimental results on this dataset indicate that none of the five existing models evaluated in this study achieves a sufficiently accurate count to replace manual counting. The dataset is available at https://figshare.com/articles/dataset/Dataset/21970604.
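Because the dataset pairs each image with a ground-truth count, a natural baseline evaluation is the mean absolute error between predicted and annotated counts. The sketch below assumes a simple CSV layout with "image" and "count" columns; this file layout and the file names are hypothetical conveniences, not necessarily how the released IDCIA annotations are organized.

```python
# Minimal sketch: comparing predicted cell counts to ground-truth counts with
# mean absolute error. The CSV columns ("image", "count") are an assumed format.
import csv

def load_counts(path):
    with open(path, newline="") as f:
        return {row["image"]: int(row["count"]) for row in csv.DictReader(f)}

def mean_absolute_error(truth, pred):
    diffs = [abs(truth[name] - pred.get(name, 0)) for name in truth]
    return sum(diffs) / len(diffs)

# ground_truth = load_counts("idcia_ground_truth.csv")   # hypothetical file names
# predictions  = load_counts("model_predictions.csv")
# print(f"MAE: {mean_absolute_error(ground_truth, predictions):.2f} cells/image")
```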
- Award ID(s):
- 2152117
- PAR ID:
- 10422833
- Date Published:
- Journal Name:
- MMSys '23: Proceedings of the 14th Conference on ACM Multimedia Systems
- Page Range / eLocation ID:
- 451 to 457
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Across basic research studies, cell counting requires significant human time and expertise. Trained experts use thin focal plane scanning to count (click) cells in stained biological tissue. This computer-assisted process (optical disector) requires a well-trained human to select a unique best z-plane of focus for counting cells of interest. Though accurate, this approach typically requires an hour per case and is prone to inter- and intra-rater errors. Our group has previously proposed deep learning (DL)-based methods to automate these counts using cell segmentation at high magnification. Here we propose a novel You Only Look Once (YOLO) model that performs cell detection on multi-channel z-plane images (disector stack). This automated Multiple Input Multiple Output (MIMO) version of the optical disector method uses an entire z-stack of microscopy images as its input and outputs cell detections (counts), with a bounding box for each cell and a class corresponding to the z-plane where the cell appears in best focus. Compared to the previous segmentation methods, the proposed method does not require time- and labor-intensive ground-truth segmentation masks for training, while producing accuracy comparable to current segmentation-based automatic counts. The MIMO-YOLO method was evaluated on systematic-random samples of NeuN-stained tissue sections through the neocortex of mouse brains (n=7). Using a cross-validation scheme, this method showed the ability to correctly count total neuron numbers with accuracy close to that of human experts and with 100% repeatability (test-retest).
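The MIMO detection idea hinges on feeding the whole disector stack to the network at once, with the class label encoding the best-focus plane. A minimal sketch of that input preparation, assuming grayscale z-plane images on disk (file names and stack depth below are hypothetical), might look like:

```python
# Sketch of preparing a multi-channel "disector stack" input: each grayscale
# z-plane becomes one channel of a single array fed to a YOLO-style detector.
import numpy as np
from PIL import Image

def load_disector_stack(plane_paths):
    """Stack grayscale z-plane images into a (num_planes, H, W) float array."""
    planes = [np.asarray(Image.open(p).convert("L"), dtype=np.float32) / 255.0
              for p in plane_paths]
    return np.stack(planes, axis=0)

# stack = load_disector_stack([f"case01_z{i:02d}.tif" for i in range(10)])  # hypothetical paths
# stack.shape -> (10, H, W); a detector trained on such stacks can report, for
# each detected cell, the index of the z-plane where it appears in best focus.
```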
Recent advances show the wide-ranging applications of machine learning for solving multi-disciplinary problems such as detecting cancer cell growth and modeling cancer growths and treatments. There is growing interest among the faculty and students at Clayton State University in studying applications of machine learning for medical imaging and proposing new algorithms, based on a recently funded NSF grant proposal covering medical imaging, skin cancer detection, an associated smartphone app, and a web-based, user-friendly diagnosis interface. We tested many available open-source ML software packages in Python for medical image data processing and for modeling cancer growths and treatments. We study the use of ML concepts that promote efficient, accurate, and secure computation over medical images, identification and classification of cancer cells, and modeling of cancer cell growth. In this collaborative project with another university, we follow a holistic approach to data analysis, leading to more efficient cancer detection based upon both cell analysis and image recognition. Here, we compare ML-based software methods and analyze their detection accuracy. In addition, we acquire publicly available cancer cell image data and analyze it using deep learning algorithms to distinguish benign from suspicious image samples. We apply current pattern-matching algorithms and study the available data for possible diagnosis of cancer types.
Despite having widespread application in the biomedical sciences, flow cytometers have several limitations that prevent their application to point-of-care (POC) diagnostics in resource-limited environments. 3D printing provides a cost-effective approach to improve the accessibility of POC devices in resource-limited environments. Towards this goal, we introduce a 3D-printed imaging platform (3DPIP) capable of accurately counting particles and performing fluorescence microscopy. In our 3DPIP, captured microscopic images of particle flow are processed by custom-developed particle-counter code to provide a particle count. This prototype uses a machine-vision-based algorithm to identify particles in captured flow images and is flexible enough to allow for labeled and label-free particle counting. Additionally, the particle-counter code returns particle coordinates with respect to time, which can further be used to perform particle image velocimetry. These results can help estimate forces acting on particles and identify and sort different types of cells/particles. We evaluated the performance of this prototype by counting 10 μm polystyrene particles diluted in deionized water at different concentrations and comparing the results with a commercial Beckman-Coulter Z2 particle counter. The 3DPIP can count particle concentrations down to ∼100 particles per mL with a standard deviation of ±20 particles, which is comparable to the results obtained on a commercial particle counter. Our platform produces accurate results at flow rates up to 9 mL h−1 for concentrations below 1000 particles per mL, while 5 mL h−1 produces accurate results above this concentration limit. Aside from performing flow-through experiments, our instrument is capable of performing static experiments comparable to a plate reader. In this configuration, our instrument is able to count between 10 and 250 cells per image, depending on the prepared concentration of the bacteria samples (Citrobacter freundii; ATCC 8090). Overall, this platform represents a first step towards the development of an affordable, fully 3D-printable imaging flow cytometry instrument for use in resource-limited clinical environments.
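As a rough illustration of the kind of machine-vision counting such particle-counter code performs, the sketch below thresholds a single flow image and counts connected components within a plausible size range. The inverted Otsu threshold (assuming dark particles on a bright background), the size limits, and the file name are assumptions, not the 3DPIP implementation.

```python
# Minimal sketch of label-free particle counting on one flow image using
# thresholding and connected components (OpenCV); parameters are illustrative.
import cv2

def count_particles(image_path, min_area=20, max_area=500):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Inverted Otsu threshold: dark particles become white foreground blobs.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n_labels, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Skip label 0 (background); keep blobs within a plausible particle-size range.
    keep = [i for i in range(1, n_labels)
            if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]
    return len(keep), centroids[keep]

# count, centers = count_particles("frame_0001.png")  # hypothetical frame
# Tracking `centers` across frames over time is what enables the particle
# image velocimetry described above.
```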

