Accurately assessing cell viability and morphological properties within 3D bioprinted hydrogel scaffolds is essential for tissue engineering but remains challenging because existing methods are invasive or rely on simple intensity thresholds. We present a computational toolbox that automates cell viability analysis and quantifies key morphological properties such as elongation, flatness, and surface roughness. The framework integrates optical coherence tomography (OCT) with deep learning-based segmentation, achieving a mean segmentation precision of 88.96%. By pairing OCT's high-resolution imaging with automated segmentation, the approach enables non-invasive, quantitative analysis that can support rapid monitoring of 3D cell cultures for regenerative medicine and biomaterial research.
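As a concrete illustration of the morphological quantities named above, elongation and flatness are commonly derived from the principal axes of a segmented cell's voxel cloud. The sketch below assumes this covariance-eigenvalue definition and a hypothetical `shape_metrics` helper; the toolbox's exact formulas are not reproduced here.

```python
# Hedged sketch: elongation and flatness of a 3D binary cell mask via
# principal-axis analysis. The definitions below are common conventions,
# not necessarily the ones used by the paper's toolbox.
import numpy as np

def shape_metrics(mask: np.ndarray) -> dict:
    """Compute elongation and flatness for one segmented cell.

    mask: 3D boolean array, True inside the cell.
    """
    coords = np.argwhere(mask).astype(float)        # (N, 3) voxel coordinates
    coords -= coords.mean(axis=0)                   # center on the centroid
    cov = np.cov(coords, rowvar=False)              # 3x3 covariance matrix
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending eigenvalues
    l1, l2, l3 = np.sqrt(np.maximum(evals, 1e-12))  # principal semi-axis lengths
    return {
        "elongation": 1.0 - l2 / l1,  # 0 = sphere-like, 1 = rod-like
        "flatness":   1.0 - l3 / l2,  # 0 = round cross-section, 1 = plate-like
    }

# Usage example on a synthetic ellipsoidal mask
zz, yy, xx = np.mgrid[:40, :40, :40]
mask = ((xx - 20) / 15) ** 2 + ((yy - 20) / 8) ** 2 + ((zz - 20) / 4) ** 2 <= 1
print(shape_metrics(mask))
```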
Frequency-aware optical coherence tomography image super-resolution via conditional generative adversarial neural network
Optical coherence tomography (OCT) has stimulated a wide range of medical image-based diagnosis and treatment in fields such as cardiology and ophthalmology. Such applications can be further facilitated by deep learning-based super-resolution technology, which improves the capability of resolving morphological structures. However, existing deep learning-based methods focus only on the spatial distribution and disregard frequency fidelity in image reconstruction, leading to a frequency bias. To overcome this limitation, we propose a frequency-aware super-resolution framework that integrates three critical frequency-based modules (i.e., frequency transformation, frequency skip connection, and frequency alignment) and a frequency-based loss function into a conditional generative adversarial network (cGAN). We conducted a large-scale quantitative study on an existing coronary OCT dataset to demonstrate the superiority of our proposed framework over existing deep learning frameworks. In addition, we confirmed the generalizability of our framework by applying it to fish corneal images and rat retinal images, demonstrating its capability to super-resolve morphological details in eye imaging.
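To make the frequency-fidelity idea concrete, the following is a minimal sketch of a frequency-domain loss added to a cGAN generator objective, assuming PyTorch; the function names and the weights `lam_freq`/`lam_pix` are illustrative, and the paper's exact modules and loss are not reproduced here.

```python
# Hedged sketch: an FFT-based fidelity term alongside the usual
# adversarial and pixel losses of a cGAN generator.
import torch
import torch.nn.functional as F

def frequency_loss(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    """Penalize spectral mismatch between super-resolved (sr) and
    ground-truth high-resolution (hr) images of shape (B, C, H, W)."""
    sr_f = torch.fft.fft2(sr, norm="ortho")
    hr_f = torch.fft.fft2(hr, norm="ortho")
    # L1 on real and imaginary parts constrains phase as well as magnitude.
    return F.l1_loss(torch.view_as_real(sr_f), torch.view_as_real(hr_f))

def generator_loss(sr, hr, disc_fake_logits, lam_freq=0.1, lam_pix=1.0):
    """Combined generator objective; the weights are placeholders,
    not the paper's values."""
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    pix = F.l1_loss(sr, hr)
    return adv + lam_pix * pix + lam_freq * frequency_loss(sr, hr)
```

Including the spectral term directly in the objective is what counteracts the frequency bias: a generator optimized only on pixel distances tends to under-represent high-frequency content.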
- PAR ID: 10461900
- Publisher / Repository: Optical Society of America
- Date Published:
- Journal Name: Biomedical Optics Express
- Volume: 14
- Issue: 10
- ISSN: 2156-7085
- Format(s): Medium: X; Size: Article No. 5148
- Sponsoring Org: National Science Foundation
More Like this
-
Unpaired data training enables super-resolution confocal microscopy from low-resolution acquisitions
  Supervised deep-learning models have enabled super-resolution imaging in several microscopic imaging modalities, increasing the spatial lateral bandwidth of the original input images beyond the diffraction limit. Despite their success, their practical application poses several challenges in terms of the amount and quality of training data, requiring the experimental acquisition of large, paired databases to generate an accurate generalized model whose performance remains invariant to unseen data. Cycle-consistent generative adversarial networks (cycleGANs) are unsupervised models for image-to-image translation tasks that are trained on unpaired datasets. This paper introduces a cycleGAN framework specifically designed to increase the lateral resolution limit in confocal microscopy by training a cycleGAN model on low- and high-resolution unpaired confocal images of human glioblastoma cells. Training and testing performance of the cycleGAN model was assessed using metrics such as background standard deviation, peak-to-noise ratio, and a customized frequency content measure. The cycleGAN model was also evaluated in terms of image fidelity and resolution improvement using a paired dataset, outperforming other reported methods. This work highlights the efficacy and promise of cycleGAN models in tackling super-resolution microscopic imaging without paired training, paving the path for turning home-built low-resolution microscopic systems into low-cost super-resolution instruments by means of unsupervised deep learning. (A hedged sketch of the cycle-consistency objective appears after this list.)
-
Generative models learned through deep learning can be used as priors in under-determined inverse problems, including imaging from a sparse set of measurements. In this paper, we present MrSARP, a novel hierarchical deep-generative model for SAR imagery that can jointly synthesize SAR images of a target at different resolutions. MrSARP is trained in conjunction with a critic that scores multi-resolution images jointly to decide whether they are realistic images of a target at different resolutions. We show how this deep generative model can be used to retrieve the high-spatial-resolution image from low-resolution images of the same target. The cost function of the generator is modified to improve its capability to retrieve the input parameters for a given set of images at different resolutions. We evaluate the model's performance on simulated data using three standard error metrics for super-resolution and compare it to upsampling- and sparsity-based image super-resolution approaches. (A hedged sketch of the latent-code retrieval step appears after this list.)
-
Optical coherence tomography (OCT) imaging enables high-resolution visualization of subsurface tissue microstructures. However, OCT image analysis using deep learning is hampered by limited, insufficiently diverse training data and by high inference latency for real-time applications. To address these challenges, we developed Octascope, a lightweight, domain-specific convolutional neural network (CNN)-based model designed for OCT image analysis. Octascope was pre-trained using a curriculum learning approach that involves sequential training, first on natural images (ImageNet), then on OCT images from retinal, abdominal, and renal tissues, to progressively acquire transferable knowledge. This multi-domain pre-training enables Octascope to generalize across varied tissue types. In two downstream tasks, Octascope demonstrated notable improvements in predictive accuracy compared to alternative approaches. In an epidural tissue detection task, it surpassed single-task learning with fine-tuning by 9.13% and OCT-specific transfer learning by 5.95% in accuracy. In a retinal diagnosis task, it outperformed VGG16 and ResNet50 by 5.36% and 6.66%, respectively. Compared to RETFound, a Transformer-based OCT foundation model, Octascope delivered 2 to 4.4 times faster inference with slightly better predictive accuracy in both downstream tasks. Octascope represents a significant advance for OCT image analysis, balancing computational efficiency and diagnostic accuracy for real-time clinical applications. (A hedged sketch of the curriculum pre-training loop appears after this list.)
-
Extreme face super-resolution (FSR), that is, improving the resolution of face images by an extreme scaling factor (often greater than ×8), has remained underexplored in the low-level vision literature. Extreme FSR in the wild must address the challenges of both unpaired training data and unknown degradation factors. Inspired by the latest advances in image super-resolution (SR) and self-supervised learning (SSL), we propose a novel two-step approach to FSR that introduces a mid-resolution (MR) image as a stepping stone. In the first step, we leverage ideas from SSL-based SR reconstruction of medical images (e.g., MRI and ultrasound) to model the realistic degradation process of face images in the real world; in the second step, we extract latent codes from MR images and interpolate them in a self-supervised manner to facilitate artifact-suppressed image reconstruction. Our two-step extreme FSR can be interpreted as a combination of self-supervised CycleGAN (step 1) and StyleGAN (step 2) that overcomes the barrier of critical resolution in face recognition. Extensive experimental results show that our two-step approach significantly outperforms existing state-of-the-art FSR techniques, including FSRGAN, Bulat's method, and PULSE, especially for large scaling factors such as ×64. (A hedged sketch of the latent interpolation step appears after this list.)
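For the unpaired confocal super-resolution entry above, the core training signal is the cycle-consistency loss, which lets low- and high-resolution images be drawn from unrelated acquisitions. The following is a minimal sketch assuming PyTorch; `G_lr2hr`, `G_hr2lr`, and the weight `lambda_cyc` are illustrative placeholders, not the authors' implementation.

```python
# Hedged sketch: cycle-consistency objective for unpaired LR<->HR training.
import torch
import torch.nn.functional as F

def cycle_loss(G_lr2hr, G_hr2lr, lr_batch, hr_batch, lambda_cyc=10.0):
    """lr_batch and hr_batch are unpaired; they need not correspond."""
    # Forward cycle: LR -> fake HR -> reconstructed LR
    rec_lr = G_hr2lr(G_lr2hr(lr_batch))
    # Backward cycle: HR -> fake LR -> reconstructed HR
    rec_hr = G_lr2hr(G_hr2lr(hr_batch))
    return lambda_cyc * (F.l1_loss(rec_lr, lr_batch) +
                         F.l1_loss(rec_hr, hr_batch))
```

In the full cycleGAN objective this term is added to the two adversarial losses; it is what keeps the translation faithful to the input without ever seeing a matched pair.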
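For the MrSARP entry above, retrieving a high-resolution image amounts to inverting the trained generator: optimizing a latent code so the generated low-resolution scales match the observations. A hypothetical sketch follows; `G`, `z_dim`, and the optimizer settings are assumptions, not the paper's configuration.

```python
# Hedged sketch: latent-space inversion of a multi-resolution generator.
import torch
import torch.nn.functional as F

def retrieve_high_res(G, lr_observations, z_dim=128, steps=500, lr=1e-2):
    """G(z) -> tuple of images at several resolutions, coarsest first;
    lr_observations holds the observed low-resolution scales in the same order."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        outputs = G(z)  # e.g., (img_32, img_64, img_128)
        # Match every generated low-resolution scale to its observation.
        loss = sum(F.mse_loss(out, obs)
                   for out, obs in zip(outputs, lr_observations))
        loss.backward()
        opt.step()
    with torch.no_grad():
        return G(z)[-1]  # finest-scale output is the retrieved image
```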
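For the Octascope entry above, curriculum pre-training means training one backbone on a sequence of domains, carrying the weights forward at each stage. A schematic sketch under assumed names follows; the stage list, per-stage heads, the `out_features` attribute, and the Adam settings are placeholders rather than Octascope's actual recipe.

```python
# Hedged sketch: sequential multi-domain pre-training of one backbone.
import torch

def curriculum_pretrain(backbone, stages, epochs_per_stage=5, lr=1e-4):
    """stages: ordered list of (loader, num_classes), e.g. ImageNet first,
    then retinal, abdominal, and renal OCT datasets."""
    criterion = torch.nn.CrossEntropyLoss()
    for loader, num_classes in stages:
        # Fresh classification head per stage (assumes the backbone exposes
        # its feature width as .out_features); backbone weights carry over.
        head = torch.nn.Linear(backbone.out_features, num_classes)
        opt = torch.optim.Adam(
            list(backbone.parameters()) + list(head.parameters()), lr=lr)
        for _ in range(epochs_per_stage):
            for images, labels in loader:
                opt.zero_grad()
                loss = criterion(head(backbone(images)), labels)
                loss.backward()
                opt.step()
    return backbone  # multi-domain pre-trained feature extractor
```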
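For the extreme FSR entry above, the second step hinges on interpolating latent codes extracted from mid-resolution images before decoding. A minimal sketch follows, assuming an encoder `E`, a StyleGAN-style generator `G`, and a blending weight `alpha`, all of which are illustrative stand-ins rather than the authors' components.

```python
# Hedged sketch: latent-code interpolation for artifact-suppressed decoding.
import torch

def interpolate_and_decode(E, G, mr_a, mr_b, alpha=0.5):
    """Blend two mid-resolution faces in latent space, then decode."""
    with torch.no_grad():
        w_a, w_b = E(mr_a), E(mr_b)              # latent codes of the MR inputs
        w_mix = (1 - alpha) * w_a + alpha * w_b  # linear latent interpolation
        return G(w_mix)                          # reconstructed high-res face
```

Interpolating in latent space rather than pixel space is what suppresses artifacts: the generator's manifold only contains plausible faces, so blended codes still decode to clean images.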
