Chondrocyte viability is a crucial factor in evaluating cartilage health. Most cell viability assays rely on dyes and are not applicable for in vivo or longitudinal studies. We previously demonstrated that two-photon excited autofluorescence and second harmonic generation microscopy provided high-resolution images of cells and collagen structure; those images allowed us to distinguish live from dead chondrocytes by visual assessment or by the normalized autofluorescence ratio. However, both methods require human involvement and have low throughput. Automated cell-based image processing can improve throughput, but conventional image processing algorithms perform poorly on autofluorescence images acquired by nonlinear microscopes because of low image contrast. In this study, we compared conventional, machine learning, and deep learning methods for chondrocyte segmentation and classification. We demonstrated that deep learning significantly improved the outcome of chondrocyte segmentation and classification, and that with appropriate training it can achieve 90% accuracy in chondrocyte viability measurement. The significance of this work is that automated image analysis is feasible and need not become a major hurdle for the use of nonlinear optical imaging methods in biological or clinical studies.
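As a rough sketch of what such automation looks like downstream of segmentation and classification, the snippet below computes a viability percentage from a labeled cell mask using a normalized-intensity criterion. The mask source, the normalization, and the threshold are all illustrative assumptions, not the pipeline from the study.

```python
import numpy as np
from scipy import ndimage

def viability_percent(autofluorescence: np.ndarray,
                      cell_labels: np.ndarray,
                      ratio_threshold: float = 0.5) -> float:
    """Estimate the percentage of viable chondrocytes in one image.

    `cell_labels` is an integer mask (0 = background, 1..N = cells),
    e.g. from a trained segmentation network. The normalized-ratio
    live/dead criterion below is a hypothetical stand-in for the
    study's classifier.
    """
    n_cells = int(cell_labels.max())
    if n_cells == 0:
        return 0.0
    # Mean autofluorescence per cell, normalized by the image-wide mean.
    cell_means = ndimage.mean(autofluorescence, labels=cell_labels,
                              index=np.arange(1, n_cells + 1))
    ratios = np.asarray(cell_means) / (autofluorescence.mean() + 1e-9)
    live = int(np.count_nonzero(ratios > ratio_threshold))  # assumed criterion
    return 100.0 * live / n_cells
```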
Automated cell properties toolbox from 3D bioprinted hydrogel scaffolds via deep learning and optical coherence tomography
Accurately assessing cell viability and morphological properties within 3D bioprinted hydrogel scaffolds is essential for tissue engineering but remains challenging due to the limitations of existing invasive and threshold-based methods. We present a computational toolbox that automates cell viability analysis and quantifies key properties such as elongation, flatness, and surface roughness. The framework integrates optical coherence tomography (OCT) with deep learning-based segmentation, achieving a mean segmentation precision of 88.96%. By pairing OCT's high-resolution, non-invasive imaging with automated segmentation, the approach enables quantitative analysis that can advance rapid monitoring of 3D cell cultures for regenerative medicine and biomaterial research.
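The morphological descriptors named above can be illustrated with a small sketch. The eigenvalue-based definitions of elongation and flatness below are common choices and are assumptions here; the toolbox's exact formulas, and its surface-roughness measure, are not reproduced.

```python
import numpy as np

def shape_descriptors(cell_mask: np.ndarray) -> dict:
    """Covariance-eigenvalue shape descriptors for one segmented 3D cell.

    `cell_mask` is a boolean voxel mask, e.g. one connected component
    from an OCT segmentation. Definitions of elongation and flatness
    vary in the literature; these are illustrative.
    """
    coords = np.argwhere(cell_mask).astype(float)
    coords -= coords.mean(axis=0)
    # Sorted eigenvalues of the voxel covariance: l1 >= l2 >= l3.
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(np.cov(coords.T)))[::-1]
    return {
        "elongation": 1.0 - l2 / (l1 + 1e-12),  # 0 sphere-like -> 1 rod-like
        "flatness": 1.0 - l3 / (l2 + 1e-12),    # 0 round -> 1 plate-like
    }
```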
- Award ID(s): 2239810
- PAR ID: 10585961
- Publisher / Repository: Optical Society of America
- Date Published:
- Journal Name: Biomedical Optics Express
- Volume: 16
- Issue: 5
- ISSN: 2156-7085
- Format(s): Medium: X
- Size(s): Article No. 2061
- Sponsoring Org: National Science Foundation
More Like this
- Ascertaining the collective viability of cells in different cell culture conditions has typically relied on averaging colorimetric indicators and is often reported as a simple binary readout. Recent research has combined viability assessment techniques with image-based deep-learning models to automate the characterization of cellular properties. However, viability measurements must be developed further to assess the continuum of possible cellular states and responses to perturbation across cell culture conditions. In this work, we demonstrate an image processing algorithm for quantifying features associated with cellular viability in 3D cultures without the need for assay-based indicators. We show that our algorithm performs comparably to a pair of human experts on whole-well images over a range of days and culture matrix compositions. To demonstrate potential utility, we perform a longitudinal study investigating the impact of a known therapeutic on pancreatic cancer spheroids. Using images taken with a high-content imaging system, the algorithm successfully tracks viability at the individual-spheroid and whole-well level. The method we propose reduces analysis time by 97% compared with the experts. Because the method is independent of the microscope or imaging system used, this approach lays the foundation for accelerating progress in, and improving the robustness and reproducibility of, 3D culture analysis across biological and clinical research.
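As an illustration of indicator-free viability features, the sketch below extracts simple per-spheroid intensity statistics from a well image. The Otsu-based detection, size cutoff, and feature set are placeholder assumptions, not the published algorithm.

```python
import numpy as np
from skimage import filters, measure

def spheroid_viability_features(image: np.ndarray) -> list[dict]:
    """Label-free per-spheroid features from a brightfield well image.

    A hypothetical stand-in for the paper's feature set: spheroids are
    detected by Otsu thresholding, and each one gets simple intensity
    and texture proxies that could correlate with viability.
    """
    mask = image < filters.threshold_otsu(image)   # dark objects = spheroids
    labels = measure.label(mask)
    feats = []
    for region in measure.regionprops(labels, intensity_image=image):
        if region.area < 50:                       # assumed noise cutoff
            continue
        pixels = region.intensity_image[region.image]  # in-region pixels only
        feats.append({
            "label": region.label,
            "area_px": int(region.area),
            "mean_intensity": float(pixels.mean()),
            "texture_std": float(pixels.std()),    # crude texture proxy
        })
    return feats
```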
- Cochlear hair cell stereocilia bundles are key organelles required for normal hearing. Deafness mutations often cause aberrant stereocilia heights or morphology that are visually apparent but challenging to quantify. As actin-based structures, stereocilia are most easily and most often labeled with phalloidin and then imaged with 3D confocal microscopy. Unfortunately, phalloidin non-specifically labels all the actin in the tissue, resulting in a challenging segmentation task in which the stereocilia signal must be separated from the rest of the tissue; this can require many hours of manual effort for each 3D confocal image stack. Currently, no existing software pipeline provides an end-to-end automated solution for 3D stereocilia bundle instance segmentation. Here we introduce VASCilia, a Napari plugin designed to automatically generate 3D instance segmentations and analyses of 3D confocal images of cochlear hair cell stereocilia bundles stained with phalloidin. The plugin combines user-friendly manual controls with deep learning-based features to streamline analyses. With VASCilia, users begin by loading image stacks; the software automatically preprocesses the samples and displays them in Napari. Users can then select their desired range of z-slices, adjust their orientation, and initiate 3D instance segmentation. After segmentation, users can remove any undesired regions and obtain measurements including volume, centroids, and surface area. VASCilia introduces unique features that measure bundle heights, determine their orientation with respect to the planar polarity axis, and quantify the fluorescence intensity within each bundle. The plugin also ships with trained deep learning models that differentiate inner hair cells from outer hair cells and predict their tonotopic position within the cochlear spiral. Additionally, the plugin includes a training section that allows other laboratories to fine-tune our model with their own data, provides responsive mechanisms for manual corrections through event handlers that check user actions, and allows users to share their analyses by uploading a pickle file containing all intermediate results. We believe this software will become a valuable resource for the cochlea research community, which has traditionally lacked specialized deep learning-based tools for high-throughput image quantitation. We plan to release our code along with a manually annotated dataset of approximately 55 3D stacks with instance segmentation, comprising 1,870 hair cell instances (410 inner and 1,460 outer hair cells), all annotated in 3D. As the first open-source dataset of its kind, it is intended as a foundational resource for constructing a comprehensive atlas of cochlear hair cell images. Together, this open-source tool will greatly accelerate the analysis of stereocilia bundles and demonstrates the power of deep learning-based algorithms for challenging segmentation tasks in biological imaging. Ultimately, this initiative will support the development of foundational models adaptable to various species, markers, and imaging scales, advancing research across the cochlea community.
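The per-bundle measurements described (volume, centroid, surface area) can be sketched from a 3D instance mask as below. This is a generic scikit-image recipe under an assumed voxel spacing, not VASCilia's implementation.

```python
import numpy as np
from skimage import measure

def bundle_measurements(instance_mask: np.ndarray,
                        voxel_size=(1.0, 1.0, 1.0)) -> list[dict]:
    """Per-bundle volume, centroid, and surface area from a 3D instance mask.

    `instance_mask` is an integer volume (0 = background, 1..N = bundles),
    e.g. the output of instance segmentation. Surface area is estimated
    from a marching-cubes mesh; voxel_size is the (z, y, x) spacing.
    """
    spacing = np.asarray(voxel_size, dtype=float)
    results = []
    for label in np.unique(instance_mask):
        if label == 0:
            continue
        binary = instance_mask == label
        verts, faces, _, _ = measure.marching_cubes(
            binary.astype(float), level=0.5, spacing=tuple(spacing))
        results.append({
            "label": int(label),
            "volume": float(binary.sum() * spacing.prod()),
            "centroid": tuple(np.argwhere(binary).mean(axis=0) * spacing),
            "surface_area": float(measure.mesh_surface_area(verts, faces)),
        })
    return results
```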
- The ability to evaluate sperm at the microscopic level and at high throughput would be useful for assisted reproductive technologies (ARTs), as it can allow specific selection of sperm cells for in vitro fertilization (IVF). The tradeoff between intrinsic imaging and external contrast agents is particularly acute in reproductive medicine. The use of fluorescence labels has enabled new cell-sorting strategies and given new insights into developmental biology; nevertheless, extrinsic contrast agents are often too invasive for routine clinical operation. Because such labels raise questions about cell viability, especially for single-cell selection, clinicians prefer intrinsic contrast in the form of phase contrast, differential interference contrast, or Hoffman modulation contrast. While such instruments are nondestructive, the resulting images lack specificity. In this work, we provide a template for circumventing the tradeoff between cell viability and specificity by combining high-sensitivity phase imaging with deep learning. To introduce specificity to label-free images, we trained a deep convolutional neural network to perform semantic segmentation on quantitative phase maps. This approach, a form of phase imaging with computational specificity (PICS), allowed us to efficiently analyze thousands of sperm cells and identify correlations between dry-mass content and artificial-reproduction outcomes. Specifically, we found that the dry-mass ratios between the head, midpiece, and tail of the cells can predict the percentages of success for zygote cleavage and embryo blastocyst formation.
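The dry-mass computation underlying these correlations follows the standard quantitative-phase relation m = λ/(2πγ) ∫φ dA, with refractive increment γ ≈ 0.2 mL/g. The sketch below applies it per compartment; the wavelength, pixel size, and masks are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

GAMMA_UM3_PER_PG = 0.2  # refractive increment ~0.2 mL/g == 0.2 um^3/pg

def dry_mass_pg(phase_rad: np.ndarray, region_mask: np.ndarray,
                wavelength_um: float = 0.55,
                pixel_area_um2: float = 0.25) -> float:
    """Dry mass (pg) of one sperm compartment from a quantitative phase map.

    Standard QPI relation: m = lambda/(2*pi*gamma) * sum(phase) * pixel_area.
    `region_mask` would come from the semantic-segmentation network
    (head / midpiece / tail); wavelength and pixel size are example values.
    """
    prefactor = wavelength_um / (2.0 * np.pi * GAMMA_UM3_PER_PG)  # pg/um^2 per rad
    return float(prefactor * phase_rad[region_mask].sum() * pixel_area_um2)

# Example predictor: dry-mass ratio of head to tail for one cell.
# ratio = dry_mass_pg(phi, head_mask) / dry_mass_pg(phi, tail_mask)
```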
- 3D imaging of porous materials in polymer electrolyte membrane (PEM)-based devices, coupled with in situ diagnostics and advanced multi-scale modelling approaches, is pivotal to deciphering the interplay of mass transport phenomena, performance, and durability. The characterization of porous electrode media in PEM-based cells, encompassing gas diffusion layers and catalyst layers, often relies on traditional analytical techniques such as 2D scanning electron microscopy followed by image processing such as Otsu thresholding and manual annotation. These methods lack the 3D context needed to capture the complex physical properties of porous electrode media and struggle to discriminate porous and solid domains accurately. To achieve enhanced, automated segmentation of porous structures, we present a 3D deep learning-based approach trained on calibrated 3D micro-CT data, focused ion beam-scanning electron microscopy datasets, and physical porosity measurements. Our approach includes binary segmentation for porous layers and a multiclass segmentation method to distinguish the microporous layers from the gas diffusion layers. The analysis framework integrates functions for pore size distribution, porosity, permeability, and tortuosity simulation analyses from the resulting binary masks and enables quantitative correlation assessments. The resulting segmentations can be visualized interactively in a 3D environment.
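Of the quantities in that analysis framework, porosity and a crude pore-size proxy are easy to sketch from a binary mask; permeability and tortuosity require flow simulation and are omitted. The distance-transform radius below is a simplification of true local-thickness measures, and the isotropic voxel size is an assumption.

```python
import numpy as np
from scipy import ndimage

def porosity_and_pore_sizes(pore_mask: np.ndarray,
                            voxel_um: float = 1.0):
    """Porosity and a per-voxel pore-radius proxy from a 3D binary mask.

    `pore_mask` is True in pore voxels (e.g. a binary mask from the
    segmentation network). Pore radius at each pore voxel is approximated
    by the Euclidean distance to the nearest solid voxel, assuming
    isotropic voxels of size `voxel_um`.
    """
    porosity = float(pore_mask.mean())                  # pore volume fraction
    radii_um = ndimage.distance_transform_edt(pore_mask) * voxel_um
    return porosity, radii_um[pore_mask]                # radii for a histogram
```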