Title: Frequency-aware optical coherence tomography image super-resolution via conditional generative adversarial neural network
Optical coherence tomography (OCT) has stimulated a wide range of medical image-based diagnosis and treatment in fields such as cardiology and ophthalmology. Such applications can be further facilitated by deep learning-based super-resolution technology, which improves the capability of resolving morphological structures. However, existing deep learning-based methods focus only on spatial distribution and disregard frequency fidelity in image reconstruction, leading to a frequency bias. To overcome this limitation, we propose a frequency-aware super-resolution framework that integrates three critical frequency-based modules (i.e., frequency transformation, frequency skip connection, and frequency alignment) and a frequency-based loss function into a conditional generative adversarial network (cGAN). We conducted a large-scale quantitative study on an existing coronary OCT dataset to demonstrate the superiority of our proposed framework over existing deep learning frameworks. In addition, we confirmed the generalizability of our framework by applying it to fish corneal images and rat retinal images, demonstrating its capability to super-resolve morphological details in eye imaging.
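The framework's exact frequency transformation, skip-connection, and alignment modules are not spelled out in this abstract. As a minimal sketch only, assuming a PyTorch setup and hypothetical weighting factors lambda_freq and lambda_adv, a frequency-fidelity term of this general kind can be added to the spatial and adversarial losses of a cGAN generator:

```python
# Sketch only (not the authors' implementation): a frequency-aware loss that
# penalizes discrepancies between the 2D FFTs of the super-resolved and
# ground-truth OCT images, added to spatial L1 and adversarial terms.
import torch
import torch.nn.functional as F

def frequency_loss(sr, hr):
    """L1 distance between the 2D FFTs of super-resolved and ground-truth images."""
    sr_f = torch.fft.fft2(sr, norm="ortho")
    hr_f = torch.fft.fft2(hr, norm="ortho")
    # Compare real and imaginary parts so both amplitude and phase are constrained.
    return F.l1_loss(torch.view_as_real(sr_f), torch.view_as_real(hr_f))

def generator_loss(sr, hr, disc_fake_logits, lambda_freq=0.1, lambda_adv=0.01):
    """Spatial L1 + frequency-domain term + adversarial term (weights are assumptions)."""
    spatial = F.l1_loss(sr, hr)
    freq = frequency_loss(sr, hr)
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return spatial + lambda_freq * freq + lambda_adv * adv
```

Comparing real and imaginary FFT components penalizes both amplitude and phase errors, which is one simple way to counteract the spectral bias of a purely spatial loss.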
Award ID(s):
2222739 2239810
PAR ID:
10461900
Publisher / Repository:
Optical Society of America
Date Published:
Journal Name:
Biomedical Optics Express
Volume:
14
Issue:
10
ISSN:
2156-7085
Format(s):
Medium: X
Size(s):
Article No. 5148
Sponsoring Org:
National Science Foundation
More Like this
  1. Accurately assessing cell viability and morphological properties within 3D bioprinted hydrogel scaffolds is essential for tissue engineering but remains challenging due to the limitations of existing invasive and threshold-based methods. We present a computational toolbox that automates cell viability analysis and quantifies key properties such as elongation, flatness, and surface roughness. This framework integrates optical coherence tomography (OCT) with deep learning-based segmentation, achieving a mean segmentation precision of 88.96%. By leveraging OCT’s high-resolution imaging with deep learning-based segmentation, our novel approach enables non-invasive, quantitative analysis, which can advance rapid monitoring of 3D cell cultures for regenerative medicine and biomaterial research. 
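The toolbox's own definitions of elongation, flatness, and surface roughness are not given in the abstract above. The sketch below, assuming a NumPy environment and a labeled 3D segmentation volume (the shape_metrics helper is hypothetical), illustrates one common way to derive elongation and flatness from the eigenvalues of each cell's voxel-coordinate covariance:

```python
# Sketch only: per-cell elongation and flatness from a labeled 3D OCT segmentation,
# using eigenvalues of the voxel-coordinate covariance (the toolbox's exact
# definitions may differ).
import numpy as np

def shape_metrics(labels):
    """labels: 3D integer array where each cell carries a unique positive label."""
    metrics = {}
    for cell_id in np.unique(labels):
        if cell_id == 0:          # 0 = background
            continue
        coords = np.argwhere(labels == cell_id).astype(float)
        if coords.shape[0] < 4:   # too few voxels for a stable covariance
            continue
        cov = np.cov(coords, rowvar=False)
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # l1 >= l2 >= l3
        l1, l2, l3 = np.sqrt(np.maximum(evals, 1e-12))   # principal semi-axis lengths
        metrics[int(cell_id)] = {
            "elongation": l1 / l2,   # > 1 for rod-like cells
            "flatness":  l2 / l3,    # > 1 for plate-like cells
        }
    return metrics
```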
  2. Supervised deep-learning models have enabled super-resolution imaging in several microscopic imaging modalities, increasing the spatial lateral bandwidth of the original input images beyond the diffraction limit. Despite their success, their practical application poses several challenges in terms of the amount of training data and its quality, requiring the experimental acquisition of large, paired databases to generate an accurate generalized model whose performance remains invariant to unseen data. Cycle-consistent generative adversarial networks (cycleGANs) are unsupervised models for image-to-image translation tasks that are trained on unpaired datasets. This paper introduces a cycleGAN framework specifically designed to increase the lateral resolution limit in confocal microscopy by training a cycleGAN model using low- and high-resolution unpaired confocal images of human glioblastoma cells. Training and testing performances of the cycleGAN model have been assessed by measuring specific metrics such as background standard deviation, peak signal-to-noise ratio, and a customized frequency content measure. Our cycleGAN model has been evaluated in terms of image fidelity and resolution improvement using a paired dataset, showing superior performance compared with other reported methods. This work highlights the efficacy and promise of cycleGAN models in tackling super-resolution microscopic imaging without paired training, paving the way for turning home-built low-resolution microscopic systems into low-cost super-resolution instruments by means of unsupervised deep learning.
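As a minimal sketch of the unpaired training idea described above, assuming PyTorch generators G_lr2hr and G_hr2lr and a hypothetical weight lambda_cyc, the cycle-consistency term that lets cycleGANs learn from unpaired low- and high-resolution confocal images can be written as:

```python
# Sketch (not the paper's code): core cycle-consistency objective for unpaired
# low-resolution <-> high-resolution image translation.
import torch
import torch.nn.functional as F

def cycle_losses(G_lr2hr, G_hr2lr, lr_batch, hr_batch, lambda_cyc=10.0):
    """G_lr2hr and G_hr2lr are generator networks mapping between the two domains."""
    fake_hr = G_lr2hr(lr_batch)           # LR -> "HR"
    rec_lr = G_hr2lr(fake_hr)             # back to LR (forward cycle)
    fake_lr = G_hr2lr(hr_batch)           # HR -> "LR"
    rec_hr = G_lr2hr(fake_lr)             # back to HR (backward cycle)
    cycle = F.l1_loss(rec_lr, lr_batch) + F.l1_loss(rec_hr, hr_batch)
    return lambda_cyc * cycle, fake_hr, fake_lr
```

The adversarial losses for the two domain discriminators (omitted here) are added to this cycle term during training.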
  3. Generative models trained using deep learning methods can be used as priors in under-determined inverse problems, including imaging from a sparse set of measurements. In this paper, we present a novel hierarchical deep-generative model, MrSARP, for SAR imagery that can jointly synthesize SAR images of a target at different resolutions. MrSARP is trained in conjunction with a critic that scores multi-resolution images jointly to decide whether they are realistic images of a target at different resolutions. We show how this deep generative model can be used to retrieve the high-resolution image from low-resolution images of the same target. The cost function of the generator is modified to improve its capability to retrieve the input parameters for a given set of resolution images. We evaluate the model's performance using three standard error metrics for super-resolution performance on simulated data and compare it to upsampling and sparsity-based image super-resolution approaches.
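MrSARP's architecture and modified cost function are not detailed in the abstract; the following sketch shows a generic generative-prior inversion, assuming a pretrained PyTorch generator and a simple average-pooling degradation model (both assumptions, not the paper's setup), which recovers a high-resolution image whose downsampled version matches the low-resolution observation:

```python
# Sketch of generative-prior inversion (generic, not the MrSARP method):
# optimize the latent code of a pretrained generator so that a downsampled
# version of its output matches the observed low-resolution image.
import torch
import torch.nn.functional as F

def invert(generator, lr_obs, scale=4, steps=500, lr=1e-2, latent_dim=128):
    z = torch.randn(lr_obs.shape[0], latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        hr_hat = generator(z)                              # candidate high-res image
        lr_hat = F.avg_pool2d(hr_hat, kernel_size=scale)   # assumed degradation model
        loss = F.mse_loss(lr_hat, lr_obs)
        loss.backward()
        opt.step()
    return generator(z).detach(), z.detach()
```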
  4. Extreme face super-resolution (FSR), that is, improving the resolution of face images by an extreme scaling factor (often greater than ×8), has remained underexplored in the literature of low-level vision. Extreme FSR in the wild must address the challenges of both unpaired training data and unknown degradation factors. Inspired by the latest advances in image super-resolution (SR) and self-supervised learning (SSL), we propose a novel two-step approach to FSR by introducing a mid-resolution (MR) image as the stepping stone. In the first step, we leverage ideas from SSL-based SR reconstruction of medical images (e.g., MRI and ultrasound) to model the realistic degradation process of face images in the real world; in the second step, we extract the latent codes from MR images and interpolate them in a self-supervised manner to facilitate artifact-suppressed image reconstruction. Our two-step extreme FSR can be interpreted as the combination of existing self-supervised CycleGAN (step 1) and StyleGAN (step 2) that overcomes the barrier of critical resolution in face recognition. Extensive experimental results have shown that our two-step approach can significantly outperform existing state-of-the-art FSR techniques, including FSRGAN, Bulat's method, and PULSE, especially for large scaling factors such as ×64.
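The paper's self-supervised interpolation of latent codes is only summarized above; as a rough sketch, assuming hypothetical encoder and decoder networks in PyTorch, blending the codes extracted from two mid-resolution images might look like:

```python
# Sketch: interpolating latent codes extracted from mid-resolution (MR) face images
# before decoding, as a stand-in for the paper's self-supervised interpolation step.
import torch

def lerp_codes(encoder, decoder, mr_a, mr_b, num_steps=5):
    """Encode two MR images, linearly blend their codes, and decode each blend."""
    with torch.no_grad():
        w_a, w_b = encoder(mr_a), encoder(mr_b)
        outputs = []
        for t in torch.linspace(0.0, 1.0, num_steps):
            w = (1.0 - t) * w_a + t * w_b     # linear interpolation in latent space
            outputs.append(decoder(w))
    return outputs
```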
  5. In optical coherence tomography (OCT), the axial resolution is often superior to the lateral resolution, which is sacrificed for long imaging depths. To address this anisotropy, we previously developed optical coherence refraction tomography (OCRT), which uses images from multiple angles to computationally reconstruct an image with isotropic resolution, given by the OCT axial resolution. On the other hand, spectroscopic OCT (SOCT), an extension of OCT, trades axial resolution for spectral resolution and hence often has superior lateral resolution. Here, we present spectroscopic OCRT (SOCRT), which uses SOCT images from multiple angles to reconstruct a spectroscopic image with isotropic spatial resolution limited by the OCT lateral resolution. We experimentally show that SOCRT can estimate bead size based on Mie theory at simultaneously high spectral and isotropic spatial resolution. We also applied SOCRT to a biological sample, achieving axial resolution enhancement limited by the lateral resolution.
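The SOCRT reconstruction itself is not reproduced here; the sketch below, assuming a NumPy environment and an interferogram sampled uniformly in wavenumber, illustrates the standard short-time Fourier transform trade-off that spectroscopic OCT uses to exchange axial resolution for spectral resolution:

```python
# Sketch of the standard spectroscopic-OCT trade-off (not the SOCRT reconstruction):
# a short-time Fourier transform over the detected spectrum yields a depth profile
# for each spectral sub-band, trading axial resolution for spectral resolution.
import numpy as np

def soct_stft(spectrum, num_bands=8, window_frac=0.25):
    """spectrum: 1D interferogram sampled uniformly in wavenumber k."""
    n = spectrum.size
    width = int(window_frac * n)                           # Gaussian window width (samples)
    centers = np.linspace(width, n - width, num_bands).astype(int)
    k = np.arange(n)
    depth_profiles = []
    for c in centers:
        window = np.exp(-0.5 * ((k - c) / (width / 2)) ** 2)
        a_line = np.abs(np.fft.ifft(spectrum * window))    # depth profile for this sub-band
        depth_profiles.append(a_line[: n // 2])            # keep positive depths only
    return np.stack(depth_profiles)                        # shape: (num_bands, n // 2)
```

Wider windows sharpen the depth profile but blur the spectral estimate, and vice versa, which is the anisotropy that multi-angle SOCRT reconstruction is designed to overcome.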