

Title: Deep-3D microscope: 3D volumetric microscopy of thick scattering samples using a wide-field microscope and machine learning

Confocal microscopy is a standard approach for obtaining volumetric images of a sample with high axial and lateral resolution, especially when dealing with scattering samples. Unfortunately, a confocal microscope is quite expensive compared to traditional microscopes. In addition, the point scanning in confocal microscopy leads to slow imaging speed and photobleaching due to the high dose of laser energy. In this paper, we demonstrate how advances in machine learning can be exploited to teach a traditional wide-field microscope, one that is available in every lab, to produce 3D volumetric images like a confocal microscope. The key idea is to acquire multiple images with different focus settings using a wide-field microscope and to use a 3D generative adversarial network (GAN)-based neural network to learn the mapping between the blurry, low-contrast image stacks obtained with a wide-field microscope and the sharp, high-contrast image stacks obtained with a confocal microscope. After training on wide-field-confocal stack pairs, the network can reliably and accurately reconstruct 3D volumetric images that rival confocal images in lateral resolution, z-sectioning, and image contrast. Our experimental results demonstrate generalization to unseen data, stability of the reconstructions, and high spatial resolution even when imaging thick (∼40 micron), highly scattering samples. We believe that such learning-based microscopes have the potential to bring confocal-quality imaging to every lab that has a wide-field microscope.
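
As a concrete illustration of the training scheme just described, here is a minimal 3D GAN training step in PyTorch: a toy encoder-decoder generator maps a wide-field focal stack to a confocal-like stack, and a patch discriminator judges (input, output) volume pairs. The architecture, shapes, and loss weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed architecture, not the authors' implementation):
# one GAN training step mapping a blurry wide-field focal stack to a
# confocal-like stack, with volumes shaped (batch, channel, z, y, x).
import torch
import torch.nn as nn

generator = nn.Sequential(              # toy 3D encoder-decoder
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
discriminator = nn.Sequential(          # toy 3D patch discriminator on (input, output) pairs
    nn.Conv3d(2, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv3d(16, 1, 4, stride=2, padding=1),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

wide_field = torch.rand(1, 1, 16, 64, 64)   # stand-in wide-field stack
confocal = torch.rand(1, 1, 16, 64, 64)     # stand-in confocal ground truth

# Discriminator step: real (wide-field, confocal) pairs -> 1, generated pairs -> 0.
fake = generator(wide_field).detach()
real_logits = discriminator(torch.cat([wide_field, confocal], dim=1))
fake_logits = discriminator(torch.cat([wide_field, fake], dim=1))
d_loss = (bce(real_logits, torch.ones_like(real_logits)) +
          bce(fake_logits, torch.zeros_like(fake_logits)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close (L1) to the confocal stack.
fake = generator(wide_field)
fake_logits = discriminator(torch.cat([wide_field, fake], dim=1))
g_loss = bce(fake_logits, torch.ones_like(fake_logits)) + 100.0 * l1(fake, confocal)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```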

 
Award ID(s):
1652633 1801372
NSF-PAR ID:
10369352
Publisher / Repository:
Optical Society of America
Date Published:
Journal Name:
Biomedical Optics Express
Volume:
13
Issue:
1
ISSN:
2156-7085
Page Range / eLocation ID:
Article No. 284
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    We have created an open-source, 3D-printable automatic microscope stage with an integrated camera, capable of imaging entire microscope slides: the PiAutoStage. The PiAutoStage was developed to interface with the high-quality optics of existing microscopes, providing an adaptable system that can be used with a range of microscope configurations. It automatically captures the entire area of a microscope slide as a series of overlapping high-resolution images, which can then be stitched into a single panoramic image. We have demonstrated the utility of the PiAutoStage attached to a transmitted-light microscope by creating high-fidelity image stacks of rock specimens in plane-polarized and cross-polarized light. We have shown that the PiAutoStage is compatible with microscopes that lack a camera attachment by using two different optical trains within the same microscope: one set of imagery collected through the photography tube of a trinocular microscope, and a second set through a camera mounted to an ocular. We furthermore establish the broad adaptability of the PiAutoStage system by attaching it to a reflected-light stereo dissection microscope to capture images of microfossils. We discuss strategies for the data-efficient online delivery of these large images through tiled imagery and open-source Java-based web viewers. The low cost of the PiAutoStage system, combined with data-efficient online delivery, makes it an important tool in promoting the universal accessibility of high-resolution microscope imagery.
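
    A minimal sketch of the scan-and-stitch loop such a system performs, assuming hypothetical move_stage() and capture() helpers standing in for the project's stepper-motor and Raspberry Pi camera control code; the stitching uses OpenCV's scan-mode stitcher:

    ```python
    # Hypothetical raster-scan-and-stitch sketch; move_stage() and capture()
    # are placeholders for the project's motor and Pi-camera control code.
    import cv2

    def move_stage(x_steps, y_steps):
        """Placeholder: drive the X/Y stepper motors to the requested position."""

    def capture(path):
        """Placeholder: trigger the camera and save a still image to `path`."""

    COLS, ROWS = 8, 6   # tile grid covering the slide (assumed values)
    STEP = 400          # motor steps per tile, chosen so neighboring tiles overlap

    tiles = []
    for r in range(ROWS):
        for c in range(COLS):
            move_stage(c * STEP, r * STEP)
            path = f"tile_{r}_{c}.jpg"
            capture(path)
            tiles.append(cv2.imread(path))

    # Stitch the overlapping tiles into one panoramic image of the whole slide.
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
    status, panorama = stitcher.stitch(tiles)
    if status == cv2.Stitcher_OK:
        cv2.imwrite("slide_panorama.jpg", panorama)
    ```

    SCANS mode assumes roughly planar, affine-related tiles, which fits a motorized slide scan better than the default panorama mode.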

     
  2. Abstract

    Digital holographic microscopy enables the 3D reconstruction of volumetric samples from a single-snapshot hologram. However, unlike a conventional bright-field microscopy image, the quality of holographic reconstructions is compromised by interference fringes as a result of twin images and out-of-plane objects. Here, we demonstrate that cross-modality deep learning using a generative adversarial network (GAN) can endow holographic images of a sample volume with bright-field microscopy contrast, combining the volumetric imaging capability of holography with the speckle- and artifact-free image contrast of incoherent bright-field microscopy. We illustrate the performance of this “bright-field holography” method through the snapshot imaging of bioaerosols distributed in 3D, matching the artifact-free image contrast and axial sectioning performance of a high-NA bright-field microscope. This data-driven deep-learning-based imaging method bridges the contrast gap between coherent and incoherent imaging, and enables the snapshot 3D imaging of objects with bright-field contrast from a single hologram, benefiting from the wave-propagation framework of holography.
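
    For intuition about the wave-propagation framework the method builds on, below is a minimal angular-spectrum sketch that numerically refocuses a hologram to different depths; the wavelength, pixel size, and depths are illustrative assumptions, and the cross-modality GAN itself is a separately trained network not shown here.

    ```python
    # Sketch of the angular-spectrum method underlying holographic refocusing.
    # Wavelength, pixel size, and depths below are illustrative assumptions.
    import numpy as np

    def angular_spectrum(field, wavelength, dx, z):
        """Propagate a complex field by distance z (all lengths in meters)."""
        n, m = field.shape
        fx = np.fft.fftfreq(m, d=dx)            # spatial frequencies (cycles/m)
        fy = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = np.sqrt(np.maximum(arg, 0.0))      # keep propagating components only
        H = np.exp(2j * np.pi * z * kz) * (arg > 0)
        return np.fft.ifft2(np.fft.fft2(field) * H)

    hologram = np.random.rand(512, 512)         # stand-in for a recorded hologram
    field = np.sqrt(hologram).astype(complex)   # crude amplitude-only initial field
    for z in (-20e-6, 0.0, 20e-6):              # refocus to a few depths
        plane = angular_spectrum(field, wavelength=532e-9, dx=1.6e-6, z=z)
        print(f"z = {z:+.0e} m, mean |field| = {np.abs(plane).mean():.3f}")
    ```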

     
  3. Traditional miniaturized fluorescence microscopes are critical tools for modern biology, but they invariably struggle to image simultaneously with high spatial resolution and a large field of view (FOV). Lensless microscopes offer a solution to this limitation. However, real-time visualization of samples has not been possible with lensless imaging, as image reconstruction can take minutes to complete. This poses a usability challenge, since real-time visualization is a crucial feature that assists users in identifying and locating the imaging target. The issue is particularly pronounced in lensless microscopes that operate at close imaging distances, where reconstruction requires shift-varying deconvolution to account for the variation of the point spread function (PSF) across the FOV. Here, we present a lensless microscope that achieves real-time image reconstruction by eliminating the iterative reconstruction algorithm. The neural-network-based reconstruction method shown here achieves a more than 10,000-fold increase in reconstruction speed over iterative reconstruction. This speedup allows us to visualize the output of our lensless microscope at more than 25 frames per second (fps) while achieving better than 7 µm resolution over a 10 mm² FOV. The ability to reconstruct and visualize samples in real time makes interaction with lensless microscopes far more user-friendly: users can operate them much as they currently do conventional microscopes.
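
    To make the speed argument concrete, the sketch below contrasts a classic FFT-based (shift-invariant) deconvolution, which with a shift-varying PSF must be repeated per region or replaced by an iterative solver, with a single feed-forward network pass. The tiny network and image sizes are assumptions for illustration, not the paper's architecture.

    ```python
    # Illustrative comparison: FFT-based deconvolution vs. one network pass.
    import time
    import numpy as np
    import torch
    import torch.nn as nn

    rng = np.random.default_rng(0)
    measurement = rng.random((512, 512))   # stand-in raw sensor image
    psf = rng.random((512, 512))           # stand-in point spread function

    def wiener(measurement, psf, eps=1e-2):
        # Shift-invariant Wiener deconvolution; with a PSF that varies across
        # the FOV this must be applied patch-by-patch (or replaced by an
        # iterative shift-varying solver), which is what makes it slow.
        P = np.fft.fft2(np.fft.ifftshift(psf))
        return np.real(np.fft.ifft2(np.fft.fft2(measurement) * np.conj(P) /
                                    (np.abs(P) ** 2 + eps)))

    recon_classic = wiener(measurement, psf)

    # One-shot alternative: a feed-forward network maps the raw sensor image
    # directly to the reconstruction in a single pass -- no iterations.
    net = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    sensor = torch.from_numpy(measurement[None, None].astype(np.float32))
    with torch.no_grad():
        t0 = time.perf_counter()
        recon_net = net(sensor)
    print(f"single forward pass: {time.perf_counter() - t0:.4f} s")
    ```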

     
  4. Structured illumination microscopy (SIM) reconstructs optically-sectioned images of a sample from multiple spatially-patterned wide-field images, whereas traditional non-patterned wide-field images are cheaper to obtain because they do not require the generation of specialized illumination patterns. In this work, we translated wide-field fluorescence microscopy images into optically-sectioned SIM images using a Pix2Pix conditional generative adversarial network (cGAN). Our model demonstrates 2D cross-modality image translation from wide-field images to optical sections, and further shows potential to recover 3D optically-sectioned volumes from wide-field image stacks. The utility of the model was tested on a variety of samples, including fluorescent beads and fresh human tissue samples.
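
    For reference, the canonical Pix2Pix objective (Isola et al., 2017); the abstract does not spell out its loss, so this is the standard form such a cGAN optimizes, with x a wide-field image, y the matching optically-sectioned SIM image, G the generator, D the discriminator, and λ weighting the L1 term:

    ```latex
    G^{*} = \arg\min_{G}\max_{D}\;
        \mathcal{L}_{\mathrm{cGAN}}(G,D) + \lambda\,\mathcal{L}_{L1}(G), \quad
    \mathcal{L}_{\mathrm{cGAN}}(G,D) =
        \mathbb{E}_{x,y}\!\left[\log D(x,y)\right] +
        \mathbb{E}_{x}\!\left[\log\bigl(1 - D(x, G(x))\bigr)\right], \quad
    \mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}\!\left[\lVert y - G(x)\rVert_{1}\right]
    ```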

     
  5. Intensity Diffraction Tomography (IDT) is a new computational microscopy technique providing quantitative, volumetric, large field-of-view (FOV) phase imaging of biological samples. This approach uses computationally efficient inverse scattering models to recover 3D phase volumes of weakly scattering objects from intensity measurements taken under diverse illumination at a single focal plane. IDT is easily implemented in a standard microscope equipped with an LED array source and requires no exogenous contrast agents, making the technology widely accessible for biological research. Here, we discuss model- and learning-based approaches for complex 3D object recovery with IDT. We present two model-based computational illumination strategies, multiplexed IDT (mIDT) [1] and annular IDT (aIDT) [2], that achieve high-throughput quantitative 3D object phase recovery at hardware-limited 4 Hz and 10 Hz volume rates, respectively. We illustrate these techniques on living epithelial buccal cells and Caenorhabditis elegans worms. For strongly scattering object recovery with IDT, we present an uncertainty quantification framework for assessing the reliability of deep-learning-based phase recovery methods [3]. This framework provides a per-pixel evaluation of the confidence level of a neural network's predictions, allowing for efficient and reliable complex object recovery. This uncertainty learning framework is widely applicable to deep-learning-based biomedical imaging techniques and shows significant potential for IDT.
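
    One common way to obtain such per-pixel confidence maps is heteroscedastic uncertainty learning, sketched below: the network predicts both a phase estimate and a log-variance map, trained with a Gaussian negative log-likelihood. This is a generic sketch in the spirit of [3], not its exact method; the architecture and shapes are illustrative assumptions.

    ```python
    # Sketch of per-pixel uncertainty learning: predict a phase estimate plus a
    # log-variance map, trained with a Gaussian negative log-likelihood.
    import torch
    import torch.nn as nn

    net = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 2, 3, padding=1),   # channel 0: phase, channel 1: log-variance
    )

    intensity = torch.rand(1, 1, 128, 128)    # stand-in IDT intensity measurement
    true_phase = torch.rand(1, 1, 128, 128)   # stand-in ground-truth phase

    out = net(intensity)
    phase, log_var = out[:, :1], out[:, 1:]

    # Heteroscedastic Gaussian NLL: uncertain pixels get a large predicted
    # variance instead of a large squared-error penalty.
    nll = 0.5 * ((true_phase - phase) ** 2 * torch.exp(-log_var) + log_var).mean()
    nll.backward()

    confidence = torch.exp(-log_var)          # per-pixel confidence map
    ```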