

Title: Deep Learning-Based Point-Scanning Super-Resolution Imaging
Point scanning imaging systems (e.g. scanning electron or laser scanning confocal microscopes) are perhaps the most widely used tools for high resolution cellular and tissue imaging. Like all other imaging modalities, the resolution, speed, sample preservation, and signal-to-noise ratio (SNR) of point scanning systems are difficult to optimize simultaneously. In particular, point scanning systems are uniquely constrained by an inverse relationship between imaging speed and pixel resolution. Here we show these limitations can be mitigated via the use of deep learning-based super-sampling of undersampled images acquired on a point-scanning system, which we termed point-scanning super-resolution (PSSR) imaging. Oversampled ground truth images acquired on scanning electron or Airyscan laser scanning confocal microscopes were used to generate semi-synthetic training data for PSSR models that were then used to restore undersampled images. Remarkably, our EM PSSR model was able to restore undersampled images acquired with different optics, detectors, samples, or sample preparation methods in other labs. PSSR enabled previously unattainable xy resolution images with our serial block face scanning electron microscope system. For fluorescence, we show that undersampled confocal images combined with a multiframe PSSR model trained on Airyscan timelapses facilitate Airyscan-equivalent spatial resolution and SNR with ~100x lower laser dose and 16x higher frame rates than corresponding high-resolution acquisitions. In conclusion, PSSR facilitates point-scanning image acquisition with otherwise unattainable resolution, speed, and sensitivity.
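As an illustration of the semi-synthetic training-data idea, the sketch below block-averages an oversampled image and adds noise to produce an undersampled training input paired with its ground truth. This is a minimal numpy sketch only; the downsampling factor, the Gaussian noise model, and the `crappify` name are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def crappify(hr, factor=4, noise_sigma=5.0, seed=0):
    """Create a semi-synthetic undersampled training input from an
    oversampled ground-truth image: block-average downsampling plus
    additive Gaussian noise (parameters are illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    h, w = hr.shape
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of factor
    lr = hr[:h2, :w2].reshape(h2 // factor, factor,
                              w2 // factor, factor).mean(axis=(1, 3))
    lr = lr + rng.normal(0.0, noise_sigma, lr.shape)
    return np.clip(lr, 0, 255)

# Example: a 512x512 "ground truth" frame becomes a noisy 128x128 input;
# the (lr, hr) pair would then be used to train the restoration model.
hr = np.random.default_rng(1).uniform(0, 255, (512, 512))
lr = crappify(hr, factor=4)
```

Pairs generated this way let the model learn the inverse mapping from undersampled, noisy acquisitions back to oversampled ones.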
Award ID(s):
1707356
NSF-PAR ID:
10171019
Author(s) / Creator(s):
Date Published:
Journal Name:
PLoS ONE
ISSN:
1932-6203
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Confocal microscopy is a standard approach for obtaining volumetric images of a sample with high axial and lateral resolution, especially when dealing with scattering samples. Unfortunately, a confocal microscope is quite expensive compared to traditional microscopes. In addition, the point scanning in confocal microscopy leads to slow imaging speed and photobleaching due to the high dose of laser energy. In this paper, we demonstrate how advances in machine learning can be exploited to teach a traditional wide-field microscope, one available in every lab, to produce 3D volumetric images like a confocal microscope. The key idea is to obtain multiple images with different focus settings using a wide-field microscope and use a 3D generative adversarial network (GAN) based neural network to learn the mapping between the blurry, low-contrast image stacks obtained using a wide-field microscope and the sharp, high-contrast image stacks obtained using a confocal microscope. After training the network with widefield-confocal stack pairs, the network can reliably and accurately reconstruct 3D volumetric images that rival confocal images in terms of lateral resolution, z-sectioning, and image contrast. Our experimental results demonstrate generalization ability to handle unseen data, stability in the reconstruction results, and high spatial resolution even when imaging thick (∼40 micron), highly scattering samples. We believe that such learning-based microscopes have the potential to bring confocal imaging quality to every lab that has a wide-field microscope.
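The mapping the network learns can be motivated by a toy forward model: each plane of a wide-field stack collects out-of-focus light from neighbouring planes, which blurs it and washes out contrast. The sketch below simulates such a blurry, low-contrast stack from a sharp (confocal-like) volume; the depth-dependent Gaussian blur and all parameter values are illustrative assumptions, not the authors' optical model, and the GAN itself is omitted.

```python
import numpy as np

def gaussian_blur_fft(img, sigma):
    """2D Gaussian blur of one plane via the FFT (numpy only)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Fourier transform of a Gaussian kernel with std `sigma` pixels.
    H = np.exp(-2 * (np.pi ** 2) * (sigma ** 2) * (fy ** 2 + fx ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def simulate_widefield_stack(sharp_volume, blur_per_plane=0.8, base_blur=1.0):
    """Toy forward model: each z-plane of the simulated wide-field stack
    mixes in light from every plane, blurred by a sigma that grows with
    its distance from the plane being imaged (illustrative only)."""
    nz = sharp_volume.shape[0]
    stack = np.empty_like(sharp_volume, dtype=float)
    for z in range(nz):
        acc = np.zeros_like(sharp_volume[0], dtype=float)
        for zp in range(nz):
            sigma = base_blur + blur_per_plane * abs(z - zp)
            acc += gaussian_blur_fft(sharp_volume[zp].astype(float), sigma)
        stack[z] = acc / nz
    return stack

# Toy demo: a random sharp volume becomes a blurry, low-contrast stack,
# forming one (widefield, confocal) training pair for such a network.
sharp = np.random.default_rng(0).uniform(0, 1, (4, 32, 32))
blurry = simulate_widefield_stack(sharp)
```

A trained network would then be asked to invert this kind of degradation, mapping `blurry` back toward `sharp`.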

     
  2. Recent advancements in image-scanning microscopy have significantly enriched super-resolution biological research, providing deeper insights into cellular structures and processes. However, current image-scanning techniques often require complex instrumentation and alignment, constraining their broader applicability in cell biological discovery and convenient, cost-effective integration into commonly used frameworks like epi-fluorescence microscopes. Here, we introduce three-dimensional multifocal scanning microscopy (3D-MSM) for super-resolution imaging of cells and tissue with substantially reduced instrumental complexity. This method harnesses the inherent 3D movement of specimens to achieve stationary, multi-focal excitation and super-resolution microscopy through a standard epi-fluorescence platform. We validated the system using a range of phantom, single-cell, and tissue specimens. The combined strengths of structured illumination, confocal detection, and epi-fluorescence setup result in two-fold resolution improvement in all three dimensions, effective optical sectioning, scalable volume acquisition, and compatibility with general imaging and sample protocols. We anticipate that 3D-MSM will pave a promising path for future super-resolution investigations in cell and tissue biology.

     
  3. Abstract

    Optical coherence tomography (OCT) is a widely used non-invasive biomedical imaging modality that can rapidly provide volumetric images of samples. Here, we present a deep learning-based image reconstruction framework that can generate swept-source OCT (SS-OCT) images using undersampled spectral data, without any spatial aliasing artifacts. This neural network-based image reconstruction does not require any hardware changes to the optical setup and can be easily integrated with existing swept-source or spectral-domain OCT systems to reduce the amount of raw spectral data to be acquired. To show the efficacy of this framework, we trained and blindly tested a deep neural network using mouse embryo samples imaged by an SS-OCT system. Using 2-fold undersampled spectral data (i.e., 640 spectral points per A-line), the trained neural network can blindly reconstruct 512 A-lines in 0.59 ms using multiple graphics processing units (GPUs), removing the spatial aliasing artifacts caused by spectral undersampling and closely matching the images of the same samples reconstructed using the full spectral OCT data (i.e., 1280 spectral points per A-line). We also successfully demonstrate that this framework can be extended to process 3× undersampled spectral data per A-line, with some degradation in reconstructed image quality compared to 2× spectral undersampling. Furthermore, an A-line-optimized undersampling method is presented by jointly optimizing the spectral sampling locations and the corresponding image reconstruction network, which improved the overall imaging performance using fewer spectral data points per A-line compared to the 2× or 3× spectral undersampling results. This deep learning-enabled image reconstruction approach can be broadly used in various forms of spectral-domain OCT systems, helping to increase their imaging speed without sacrificing image resolution and signal-to-noise ratio.
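Why 2× spectral undersampling produces spatial aliasing can be seen with a toy single-reflector model: an A-line is (up to windowing and dispersion) the Fourier transform of the spectral fringe, so halving the spectral sampling rate folds depths beyond the new Nyquist limit back into the image. A minimal numpy sketch, with the 1280/640-point spectra described above and an illustrative reflector depth:

```python
import numpy as np

# Toy SS-OCT model: a single reflector at depth bin `d` produces a
# cosine fringe across the N spectral samples (no dispersion, no noise).
N, d = 1280, 400
k = np.arange(N)
spectrum = np.cos(2 * np.pi * d * k / N)

# Full-spectrum reconstruction: FFT magnitude peaks at the true depth bin.
full = np.abs(np.fft.fft(spectrum))
depth_full = int(np.argmax(full[: N // 2]))       # 400: correct depth

# 2x spectral undersampling (640 points per A-line): depths beyond the
# new Nyquist limit (bin 320) fold back, so the reflector appears at
# the aliased bin 640 - 400 = 240 instead of 400.
under = np.abs(np.fft.fft(spectrum[::2]))
depth_under = int(np.argmax(under[: N // 4]))     # 240: aliased depth
```

The reconstruction network's job is precisely to undo this folding so the undersampled A-line matches the full-spectrum one.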

     
  4. Magnetic resonance imaging (MRI) is a highly significant imaging platform for a variety of medical and research applications. However, the low spatiotemporal resolution of conventional MRI limits its applicability toward rapid acquisition of ultrahigh-resolution scans. Current aims at high-resolution MRI focus on increasing the accuracy of tissue delineation, assessments of structural integrity, and early identification of malignancies. Unfortunately, high-resolution imaging often leads to decreased signal-to-noise (SNR) and contrast-to-noise (CNR) ratios and increased time cost, which are unfeasible in many clinical and academic settings, offsetting any potential benefits. In this study, we apply and assess the efficacy of super-resolution reconstruction (SRR) through iterative back-projection utilizing through-plane voxel offsets. SRR allows for high-resolution imaging in condensed time frames. Rat skulls and archerfish samples, typical models in academic settings, were used to demonstrate the impact of SRR on varying sample sizes and applicability for translational and comparative neuroscience. The SNR and CNR increased in samples that did not fully occupy the imaging probe and in instances where the low-resolution data were acquired in three dimensions, while the CNR was found to increase with both 3D and 2D low-resolution data reconstructions when compared with directly acquired high-resolution images. Limitations of the applied SRR algorithm were investigated to determine the maximum ratios between low-resolution inputs and high-resolution reconstructions and the overall cost effectiveness of the strategy. Overall, the study revealed that SRR could be used to decrease image acquisition time, increase the CNR in nearly all instances, and increase the SNR in small samples.
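Iterative back-projection can be sketched in one dimension: each low-resolution acquisition is modeled as a block average of the high-resolution signal at a known sub-voxel offset, and the residual between the measured and simulated low-resolution data is repeatedly pushed back into the high-resolution estimate. The block-average forward model, step size, and 1-D setting below are simplifying assumptions for illustration, not the study's exact algorithm.

```python
import numpy as np

def downsample(hr, factor, offset):
    """Forward model: a low-resolution acquisition is a block average of
    the high-resolution signal shifted by a sub-voxel `offset` (with
    periodic wrap-around for simplicity)."""
    return np.roll(hr, -offset).reshape(-1, factor).mean(axis=1)

def srr_ibp(lr_images, offsets, factor, iters=200, step=1.0):
    """Super-resolution reconstruction by iterative back-projection:
    simulate each low-res input from the current high-res estimate and
    back-project the residual (simplified 1-D sketch)."""
    hr = np.repeat(lr_images[0], factor).astype(float)  # naive init
    for _ in range(iters):
        for lr, off in zip(lr_images, offsets):
            err = lr - downsample(hr, factor, off)
            # Distribute each residual over its block on the shifted grid.
            hr += step * np.roll(np.repeat(err, factor) / factor, off)
    return hr

# Toy example: recover a high-res signal from two half-voxel-offset
# low-res block averages, as with through-plane voxel offsets.
truth = np.sin(np.linspace(0, 3 * np.pi, 64))
lrs = [downsample(truth, 2, off) for off in (0, 1)]
recon = srr_ibp(lrs, (0, 1), 2)
```

The reconstruction is consistent with every low-resolution input at once, which is what recovers detail a single acquisition cannot.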
  5. Abstract

    In the field of optical imaging, the ability to image tumors at depth with high selectivity and specificity remains a challenge. Surface enhanced resonance Raman scattering (SERRS) nanoparticles (NPs) can be employed as image contrast agents to specifically target cells in vivo; however, this technique typically requires time-intensive point-by-point acquisition of Raman spectra. Here, we combine the use of “spatially offset Raman spectroscopy” (SORS) with that of SERRS in a technique known as “surface enhanced spatially offset resonance Raman spectroscopy” (SESORRS) to image deep-seated tumors in vivo. Additionally, by accounting for the laser spot size, we report an experimental approach for detecting both the bulk tumor, subsequent delineation of tumor margins at high speed, and the identification of a deeper secondary region of interest with fewer measurements than are typically applied. To enhance light collection efficiency, four modifications were made to a previously described custom-built SORS system. Specifically, the following parameters were increased: (i) the numerical aperture (NA) of the lens, from 0.2 to 0.34; (ii) the working distance of the probe, from 9 mm to 40 mm; (iii) the NA of the fiber, from 0.2 to 0.34; and (iv) the fiber diameter, from 100 µm to 400 µm. To calculate the sampling frequency, which refers to the number of data point spectra obtained for each image, we considered the laser spot size of the elliptical beam (6 × 4 mm). Using SERRS contrast agents, we performed in vivo SESORRS imaging on a GL261-Luc mouse model of glioblastoma at four distinct sampling frequencies: par-sampling frequency (12 data points collected), and over-frequency sampling by factors of 2 (35 data points collected), 5 (176 data points collected), and 10 (651 data points collected). In comparison to the previously reported SORS system, the modified SORS instrument showed a 300% improvement in signal-to-noise ratios (SNR). 
The results demonstrate the ability to acquire distinct Raman spectra from deep-seated glioblastomas in mice through the skull using a low power density (6.5 mW/mm²) and 30-times shorter integration times than a previous report (0.5 s versus 15 s). The ability to map the whole head of the mouse and determine a specific region of interest using as few as 12 spectra (6 s total acquisition time) is achieved. Subsequent use of a higher sampling frequency demonstrates it is possible to delineate the tumor margins in the region of interest with greater certainty. In addition, SESORRS images indicate the emergence of a secondary tumor region deeper within the brain in agreement with MRI and H&E staining. In comparison to traditional Raman imaging approaches, this approach enables improvements in the detection of deep-seated tumors in vivo through depths of several millimeters due to improvements in SNR, spectral resolution, and depth acquisition. This approach offers an opportunity to navigate larger areas of tissues in shorter time frames than previously reported, identify regions of interest, and then image the same area with greater resolution using a higher sampling frequency. Moreover, using a SESORRS approach, we demonstrate that it is possible to detect secondary, deeper-seated lesions through the intact skull.
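The relationship between spot size, oversampling factor, and the number of spectra acquired can be sketched with a simple raster-step calculation. The field-of-view numbers below and the convention that oversampling by a factor f shrinks the step by √f per axis are hypothetical assumptions for illustration; they do not reproduce the paper's exact point counts (12, 35, 176, 651).

```python
import math

def sampling_points(fov_mm, spot_mm, oversample=1):
    """Number of raster points needed to cover a rectangular field of
    view when stepping an elliptical laser spot. At par sampling the
    step equals the spot size; oversampling by a factor f divides the
    step by sqrt(f) in each axis (illustrative convention, not
    necessarily the paper's definition)."""
    step_x = spot_mm[0] / math.sqrt(oversample)
    step_y = spot_mm[1] / math.sqrt(oversample)
    nx = math.ceil(fov_mm[0] / step_x)
    ny = math.ceil(fov_mm[1] / step_y)
    return nx * ny

# Hypothetical 24 x 18 mm field scanned with the paper's 6 x 4 mm spot:
par = sampling_points((24, 18), (6, 4), oversample=1)   # -> 20 points
two = sampling_points((24, 18), (6, 4), oversample=2)   # -> 42 points
```

The same trade-off drives the acquisition times quoted above: fewer points locate a region of interest quickly, and a denser grid then resolves its margins.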

     