Title: Deep learning-based super-resolution in coherent imaging systems
Abstract

We present a deep learning framework based on a generative adversarial network (GAN) to perform super-resolution in coherent imaging systems. We demonstrate that this framework can enhance the resolution of both pixel size-limited and diffraction-limited coherent imaging systems. The capabilities of this approach are experimentally validated by super-resolving complex-valued images acquired using a lensfree on-chip holographic microscope, the resolution of which was pixel size-limited. Using the same GAN-based approach, we also improved the resolution of a lens-based holographic imaging system that was limited in resolution by the numerical aperture of its objective lens. This deep learning-based super-resolution framework can be broadly applied to enhance the space-bandwidth product of coherent imaging systems using image data and convolutional neural networks, and provides a rapid, non-iterative method for solving inverse image reconstruction or enhancement problems in optics.
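
To make the framework concrete, here is a minimal sketch of the kind of generator-discriminator pair such an approach trains, written in PyTorch. The layer counts, channel widths, sub-pixel upsampling, and the L1-plus-adversarial loss below are illustrative assumptions, not the paper's actual architecture; the real and imaginary parts of the complex field are carried as two image channels.

    # Minimal GAN sketch for complex-valued super-resolution (illustrative only).
    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self, channels=2, features=64, upscale=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(),
                nn.Conv2d(features, features, 3, padding=1), nn.ReLU(),
                nn.Conv2d(features, channels * upscale ** 2, 3, padding=1),
                nn.PixelShuffle(upscale),  # sub-pixel upsampling to the SR grid
            )

        def forward(self, x):  # x: (N, 2, H, W) low-resolution complex field
            return self.net(x)

    class Discriminator(nn.Module):
        def __init__(self, channels=2, features=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, features, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(features, features * 2, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(features * 2, 1),
            )

        def forward(self, x):  # returns a raw realness score per image
            return self.net(x)

    # One hypothetical generator objective: pixel fidelity to the registered
    # high-resolution ground truth plus a small adversarial term.
    def generator_loss(d_fake, sr, hr, alpha=0.01):
        adv = nn.functional.binary_cross_entropy_with_logits(
            d_fake, torch.ones_like(d_fake))
        return nn.functional.l1_loss(sr, hr) + alpha * adv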

 
NSF-PAR ID: 10153621
Author(s) / Creator(s): ; ; ; ; ; ;
Publisher / Repository: Nature Publishing Group
Date Published:
Journal Name: Scientific Reports
Volume: 9
Issue: 1
ISSN: 2045-2322
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Digital holographic microscopy enables the 3D reconstruction of volumetric samples from a single-snapshot hologram. However, unlike a conventional bright-field microscopy image, the quality of holographic reconstructions is compromised by interference fringes caused by twin images and out-of-plane objects. Here, we demonstrate that cross-modality deep learning using a generative adversarial network (GAN) can endow holographic images of a sample volume with bright-field microscopy contrast, combining the volumetric imaging capability of holography with the speckle- and artifact-free image contrast of incoherent bright-field microscopy. We illustrate the performance of this “bright-field holography” method through the snapshot imaging of bioaerosols distributed in 3D, matching the artifact-free image contrast and axial sectioning performance of a high-NA bright-field microscope. This data-driven deep-learning-based imaging method bridges the contrast gap between coherent and incoherent imaging, and enables the snapshot 3D imaging of objects with bright-field contrast from a single hologram, benefiting from the wave-propagation framework of holography. (A minimal sketch of the free-space propagation step underlying that framework appears after this list.)

     
  2. Abstract

    In the field of optical imaging, the ability to image tumors at depth with high selectivity and specificity remains a challenge. Surface enhanced resonance Raman scattering (SERRS) nanoparticles (NPs) can be employed as image contrast agents to specifically target cells in vivo; however, this technique typically requires time-intensive point-by-point acquisition of Raman spectra. Here, we combine “spatially offset Raman spectroscopy” (SORS) with SERRS in a technique known as “surface enhanced spatially offset resonance Raman spectroscopy” (SESORRS) to image deep-seated tumors in vivo. Additionally, by accounting for the laser spot size, we report an experimental approach for detecting the bulk tumor, delineating its margins at high speed, and identifying a deeper secondary region of interest, all with fewer measurements than are typically applied. To enhance light collection efficiency, four modifications were made to a previously described custom-built SORS system. Specifically, the following parameters were increased: (i) the numerical aperture (NA) of the lens, from 0.2 to 0.34; (ii) the working distance of the probe, from 9 mm to 40 mm; (iii) the NA of the fiber, from 0.2 to 0.34; and (iv) the fiber diameter, from 100 µm to 400 µm. To calculate the sampling frequency, which refers to the number of data-point spectra obtained for each image, we considered the laser spot size of the elliptical beam (6 × 4 mm). Using SERRS contrast agents, we performed in vivo SESORRS imaging on a GL261-Luc mouse model of glioblastoma at four distinct sampling frequencies: par sampling (12 data points collected) and oversampling by factors of 2 (35 data points), 5 (176 data points), and 10 (651 data points); a short calculation reproducing these counts appears after this list. In comparison to the previously reported SORS system, the modified instrument showed a 300% improvement in signal-to-noise ratio (SNR). The results demonstrate the ability to acquire distinct Raman spectra from deep-seated glioblastomas in mice through the skull using a low power density (6.5 mW/mm²) and 30-fold shorter integration times than a previous report (0.5 s versus 15 s). We map the whole head of the mouse and determine a specific region of interest using as few as 12 spectra (6 s total acquisition time). Subsequent use of a higher sampling frequency makes it possible to delineate the tumor margins in the region of interest with greater certainty. In addition, SESORRS images indicate the emergence of a secondary tumor region deeper within the brain, in agreement with MRI and H&E staining. In comparison to traditional Raman imaging approaches, this approach improves the detection of deep-seated tumors in vivo through depths of several millimeters, owing to improvements in SNR, spectral resolution, and acquisition depth. It offers an opportunity to survey larger areas of tissue in shorter time frames than previously reported, identify regions of interest, and then image the same area at greater resolution using a higher sampling frequency. Moreover, using a SESORRS approach, we demonstrate that it is possible to detect secondary, deeper-seated lesions through the intact skull.

     
  3. Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images. However, such images are typically enormous, making them inconvenient to manage, transfer across a computer network, or store in a limited computer storage system. As a result, image compression is commonly used to reduce image size, at the cost of image resolution. Here, we demonstrate custom convolutional neural networks (CNNs) for both super-resolution enhancement of low-resolution images and characterization of cells and nuclei in hematoxylin and eosin (H&E) stained breast cancer histopathological images, using a combination of generator and discriminator networks, a super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt), to facilitate cancer diagnosis in low-resource settings. The network substantially enhances image quality: the peak signal-to-noise ratio and structural similarity of its outputs exceed 30 dB and 0.93, respectively, outperforming both bicubic interpolation and the well-known SRGAN deep-learning method. (A sketch of how these image-quality and segmentation metrics are commonly computed appears after this list.) In addition, another custom CNN performs image segmentation on the high-resolution breast cancer images generated by our model, achieving an average Intersection over Union of 0.869 and an average Dice similarity coefficient of 0.893 on the H&E segmentation results. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-net models, which use the weights of the individually trained SRGAN-ResNeXt and Inception U-net models as pre-trained weights for transfer learning; the jointly trained models' results improve progressively and are promising. We anticipate these custom CNNs can help overcome the inaccessibility of advanced microscopes or whole slide imaging (WSI) systems by producing high-resolution images from low-performance microscopes located in remote, resource-constrained settings.
  4. The persistence of the global COVID-19 pandemic caused by the SARS-CoV-2 virus has continued to emphasize the need for point-of-care (POC) diagnostic tests for viral diagnosis. The most widely used tests, lateral flow assays used in rapid antigen tests and reverse-transcriptase real-time polymerase chain reaction (RT-PCR), have been instrumental in mitigating the impact of new waves of the pandemic, but fail to provide readout that is both sensitive and rapid. Here, we present a portable lens-free imaging system coupled with a particle agglutination assay as a novel biosensor for SARS-CoV-2. This sensor images and quantifies individual microbeads undergoing agglutination through a combination of computational imaging and deep learning as a way to detect levels of SARS-CoV-2 in a complex sample. SARS-CoV-2 pseudovirus in solution is incubated with angiotensin-converting enzyme 2 (ACE2)-functionalized microbeads and then loaded into an inexpensive imaging chip. The sample is imaged in a portable in-line lens-free holographic microscope, and an image is reconstructed from a pixel super-resolved hologram. Images are analyzed by a deep-learning algorithm that distinguishes microbead agglutination from cell debris and viral particle aggregates, and agglutination is quantified based on the network output. (A simplified sketch of such agglutination quantification appears after this list.) We propose a two-image assay procedure that accurately determines viral concentrations above the limit of detection (LOD) of 1.27 × 10³ copies per mL, with a tested dynamic range of 3 orders of magnitude, without yet reaching the upper limit. This biosensor can be used for fast SARS-CoV-2 diagnosis in low-resource POC settings and has the potential to mitigate the spread of future waves of the pandemic.
  5. Spatial resolution is critical for observing and monitoring environmental phenomena. Because acquiring high-resolution bathymetry data directly from satellites is not always feasible due to equipment limitations, spatial data scientists and researchers turn to single image super-resolution (SISR) methods that use deep learning to increase pixel density. While super-resolution residual networks (e.g., SR-ResNet) are promising for this purpose, several challenges still need to be addressed: (1) Earth data such as bathymetry are expensive to obtain, and the available data records are relatively limited; (2) model training must comply with certain domain knowledge; (3) certain areas of interest require more accurate measurements than others. To address these challenges, following the transfer-learning principle, we study how to leverage an existing pre-trained super-resolution deep learning model, namely SR-ResNet, for high-resolution bathymetry data generation. We further enhance the SR-ResNet model with loss functions that encode domain knowledge, and, to make the model perform better in certain spatial areas, we add loss terms that increase the penalty within areas of interest (a sketch of such a region-weighted loss appears after this list). Our experiments show our approaches achieve higher accuracy than most baseline models when evaluated with metrics including MSE, PSNR, and SSIM.
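
For the bright-field holography abstract (1), the “wave-propagation framework of holography” rests on free-space propagation of a reconstructed complex field to different depths. Below is a minimal sketch of the standard angular spectrum method; the wavelength, pixel size, and refocusing distance are placeholder values, not those of the cited work.

    # Angular spectrum free-space propagation (a standard holography tool).
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, z):
        """Propagate a complex 2D field a distance z (all units in meters)."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=dx)
        fy = np.fft.fftfreq(ny, d=dx)
        FX, FY = np.meshgrid(fx, fy)
        # Transfer function: exp(i * 2*pi*z * sqrt(1/lambda^2 - fx^2 - fy^2))
        arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
        H = np.exp(1j * z * kz) * (arg > 0)  # evanescent components dropped
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # Example: refocus a (placeholder) hologram field to a plane 50 um away,
    # assuming 532 nm illumination and 1.12 um pixels.
    hologram = np.ones((512, 512), dtype=complex)
    refocused = angular_spectrum_propagate(hologram, 532e-9, 1.12e-6, 50e-6)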
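
For the SESORRS abstract (2), the reported point counts are consistent with a rectangular scan grid refined by the oversampling factor. The sketch below reproduces 12, 35, 176, and 651 points under our assumption (not stated in the abstract) of a 4 × 3 par-sampling grid, stepped by the 6 × 4 mm spot size, whose steps are subdivided f-fold when oversampling.

    # Reproduce the reported sampling-point counts (12, 35, 176, 651).
    # Assumed model: a cols x rows par grid; oversampling by factor f
    # subdivides each step, giving ((cols-1)*f + 1) * ((rows-1)*f + 1) points.
    def n_points(cols, rows, f):
        return ((cols - 1) * f + 1) * ((rows - 1) * f + 1)

    for f in (1, 2, 5, 10):
        print(f, n_points(4, 3, f))  # -> 1 12, 2 35, 5 176, 10 651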
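
For the SRGAN-ResNeXt abstract (3), the quoted figures (PSNR, SSIM, Intersection over Union, Dice) are standard metrics. A minimal sketch of how they are commonly computed, using scikit-image and NumPy on placeholder arrays, follows; it is not the authors' evaluation code.

    # Standard image-quality and segmentation metrics (illustrative).
    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def segmentation_scores(pred, truth):
        """IoU and Dice for boolean segmentation masks of equal shape."""
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        return inter / union, 2 * inter / (pred.sum() + truth.sum())

    hr = np.random.rand(256, 256)                              # placeholder truth
    sr = np.clip(hr + 0.01 * np.random.randn(256, 256), 0, 1)  # placeholder output
    print(peak_signal_noise_ratio(hr, sr, data_range=1.0))     # PSNR in dB
    print(structural_similarity(hr, sr, data_range=1.0))       # SSIM in [0, 1]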
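
For the lens-free biosensor abstract (4), agglutination quantification can be pictured as segmenting beads and measuring how much bead area sits in multi-bead clusters. The sketch below substitutes classical connected-component analysis, with assumed thresholds, for the paper's deep network, purely to illustrate the idea.

    # Simplified agglutination quantification (stand-in for the deep network).
    import numpy as np
    from scipy import ndimage

    def agglutination_fraction(image, bead_thresh=0.5, single_bead_area=50):
        mask = image > bead_thresh                 # crude bead segmentation
        labels, n = ndimage.label(mask)            # connected components
        if n == 0:
            return 0.0
        sizes = np.asarray(ndimage.sum(mask, labels, range(1, n + 1)))
        clustered = sizes[sizes > 1.5 * single_bead_area].sum()
        return clustered / sizes.sum()             # 0 = monodisperse, 1 = clumped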
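
For the bathymetry abstract (5), a region-weighted loss of the kind described can be sketched in a few lines of PyTorch; the weight value and the plain MSE base loss are our assumptions, not the authors' exact formulation.

    # Region-weighted reconstruction loss: areas of interest are penalized more.
    import torch

    def roi_weighted_mse(pred, target, roi_mask, roi_weight=5.0):
        """roi_mask is 1.0 inside areas of interest, 0.0 elsewhere."""
        weights = 1.0 + (roi_weight - 1.0) * roi_mask
        return (weights * (pred - target) ** 2).mean()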