
Title: Deep-3D microscope: 3D volumetric microscopy of thick scattering samples using a wide-field microscope and machine learning

Confocal microscopy is a standard approach for obtaining volumetric images of a sample with high axial and lateral resolution, especially when dealing with scattering samples. Unfortunately, a confocal microscope is quite expensive compared to traditional microscopes. In addition, the point scanning in confocal microscopy leads to slow imaging speed and photobleaching due to the high dose of laser energy. In this paper, we demonstrate how advances in machine learning can be exploited to teach a traditional wide-field microscope, one that is available in every lab, to produce 3D volumetric images like a confocal microscope. The key idea is to obtain multiple images with different focus settings using a wide-field microscope and use a 3D generative adversarial network (GAN) based neural network to learn the mapping between the blurry, low-contrast image stacks obtained with a wide-field microscope and the sharp, high-contrast image stacks obtained with a confocal microscope. After training the network with wide-field–confocal stack pairs, the network can reliably and accurately reconstruct 3D volumetric images that rival confocal images in lateral resolution, z-sectioning, and image contrast. Our experimental results demonstrate generalization to unseen data, stability in the reconstruction results, and high spatial resolution even when imaging thick (∼40 micron), highly scattering samples. We believe that such learning-based microscopes have the potential to bring confocal imaging quality to every lab that has a wide-field microscope.
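Training such a network requires pairs of a blurry wide-field stack and a sharp confocal stack of the same volume. As a minimal numpy/scipy sketch of why the wide-field stack is blurry, the toy forward model below mixes depth-dependent Gaussian blur from neighboring planes into each focal plane; this is a crude illustrative stand-in, not the paper's actual optical model, and `simulate_widefield_stack` is a hypothetical helper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_widefield_stack(sharp_volume, defocus_sigma=1.5):
    """Crude illustrative forward model: each wide-field focal plane
    sums contributions from every depth, blurred more strongly the
    further that depth lies from the focal plane. A stand-in for
    real wide-field optics, not the paper's model."""
    depth = sharp_volume.shape[0]
    widefield = np.zeros(sharp_volume.shape, dtype=float)
    for z_focus in range(depth):
        for z in range(depth):
            # blur grows with defocus distance |z - z_focus|
            sigma = defocus_sigma * (1 + abs(z - z_focus))
            widefield[z_focus] += gaussian_filter(
                sharp_volume[z].astype(float), sigma)
        widefield[z_focus] /= depth
    return widefield
```

In the paper the inverse of this kind of mapping is learned by a 3D GAN from measured wide-field–confocal pairs; the sketch only shows why each wide-field plane contains low-contrast, out-of-focus contributions that the network must remove.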

Award ID(s):
1652633 1801372
Publisher / Repository:
Optical Society of America
Date Published:
Journal Name:
Biomedical Optics Express
Page Range / eLocation ID:
Article No. 284
Sponsoring Org:
National Science Foundation
More Like This
  1. Abstract

    Digital holographic microscopy enables the 3D reconstruction of volumetric samples from a single-snapshot hologram. However, unlike a conventional bright-field microscopy image, the quality of holographic reconstructions is compromised by interference fringes as a result of twin images and out-of-plane objects. Here, we demonstrate that cross-modality deep learning using a generative adversarial network (GAN) can endow holographic images of a sample volume with bright-field microscopy contrast, combining the volumetric imaging capability of holography with the speckle- and artifact-free image contrast of incoherent bright-field microscopy. We illustrate the performance of this “bright-field holography” method through the snapshot imaging of bioaerosols distributed in 3D, matching the artifact-free image contrast and axial sectioning performance of a high-NA bright-field microscope. This data-driven deep-learning-based imaging method bridges the contrast gap between coherent and incoherent imaging, and enables the snapshot 3D imaging of objects with bright-field contrast from a single hologram, benefiting from the wave-propagation framework of holography.

  2. Abstract

    We have created an open‐source 3D printable microscope automatic stage and integrated camera system capable of providing a means for imaging microscope slides—the PiAutoStage. The PiAutoStage was developed to interface with the high‐quality optics of existing microscopes by creating an adaptable system that can be used in conjunction with a range of microscope configurations. The PiAutoStage automatically captures the entire area of a microscope slide in a series of overlapping high‐resolution images, which can then be stitched into a single panoramic image. We have demonstrated the utility of the PiAutoStage when attached to a transmitted light microscope by creating high‐fidelity image stacks of rock specimens in plane polarized and cross‐polarized light. We have shown that the PiAutoStage is compatible with microscopes that do not currently have a camera attachment by using two different optical trains within the same microscope: one set of imagery collected through the photography tube of a trinocular microscope, and a second set through a camera mounted to an ocular. We furthermore establish the broad adaptability of the PiAutoStage system by attaching it to a reflected light stereo dissection microscope to capture images of microfossils. We discuss strategies for the online delivery of these large‐sized images in a data efficient manner through the application of tiled imagery and open‐source Java‐based web viewers. The low cost of the PiAutoStage system, combined with the data‐efficient mechanisms of online delivery make this system an important tool in promoting the universal accessibility of high‐resolution microscope imagery.
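    The core stitching step, combining overlapping tiles captured on a known scan grid into one panorama, can be sketched in a few lines of numpy. This is a minimal sketch assuming stage positions are already known in pixels; real stitching pipelines first refine these offsets by registering the overlap regions, and `stitch_grid` is a hypothetical helper, not part of the PiAutoStage code:

```python
import numpy as np

def stitch_grid(tiles, positions, canvas_shape):
    """Place overlapping tiles onto one canvas at known stage
    positions (row, col in pixels), averaging intensities where
    tiles overlap. Real stitching software also registers the
    overlaps to correct stage error and blends seams smoothly."""
    canvas = np.zeros(canvas_shape, dtype=float)
    weight = np.zeros(canvas_shape, dtype=float)
    for tile, (y, x) in zip(tiles, positions):
        h, w = tile.shape
        canvas[y:y + h, x:x + w] += tile
        weight[y:y + h, x:x + w] += 1.0
    # avoid division by zero where no tile was placed
    return canvas / np.maximum(weight, 1.0)
```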

  3. Open-top light-sheet (OTLS) microscopes have been developed for user-friendly and versatile high-throughput 3D microscopy of thick specimens. As with all imaging modalities, spatial resolution trades off with imaging and analysis times. A hierarchical multi-scale imaging workflow would therefore be of value for many volumetric microscopy applications. We describe a compact multi-resolution OTLS microscope, enabled by a novel solid immersion meniscus lens (SIMlens), which allows users to rapidly transition between air-based objectives for low- and high-resolution 3D imaging. We demonstrate the utility of this system by showcasing an efficient 3D analysis workflow for a diagnostic pathology application.

  4. Traditional miniaturized fluorescence microscopes are critical tools for modern biology. Invariably, they struggle to simultaneously image with high spatial resolution and a large field of view (FOV). Lensless microscopes offer a solution to this limitation. However, real-time visualization of samples is not possible with lensless imaging, as image reconstruction can take minutes to complete. This poses a challenge for usability, as real-time visualization is a crucial feature that helps users identify and locate the imaging target. The issue is particularly pronounced in lensless microscopes that operate at close imaging distances, where shift-varying deconvolution is required to account for the variation of the point spread function (PSF) across the FOV. Here, we present a lensless microscope that achieves real-time image reconstruction by eliminating the iterative reconstruction algorithm. The neural-network-based reconstruction method shown here achieves a more than 10,000-fold increase in reconstruction speed compared to iterative reconstruction. The increased reconstruction speed allows us to visualize the results of our lensless microscope at more than 25 frames per second (fps) while achieving better than 7 µm resolution over a FOV of 10 mm2. The ability to reconstruct and visualize samples in real time enables a more user-friendly interaction with lensless microscopes, allowing them to be operated much like conventional microscopes.
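When the PSF is shift-invariant, a non-iterative reconstruction already exists: single-pass Wiener deconvolution in the Fourier domain. The sketch below (an illustrative simplification, not this microscope's method) shows why one FFT pass is fast; the difficulty at close imaging distances is that the PSF varies across the FOV, breaking this assumption, which is what motivates replacing iterative shift-varying solvers with a learned reconstruction:

```python
import numpy as np

def wiener_deconv(blurred, psf, eps=1e-2):
    """Single-pass Wiener deconvolution for a shift-INVARIANT PSF
    (psf given as a centered kernel, same shape as the image).
    One FFT round trip, no iterations. A lensless microscope at
    close distance has a shift-varying PSF, so this simple form
    does not apply directly."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    B = np.fft.fft2(blurred)
    # regularized inverse filter: conj(H) / (|H|^2 + eps)
    X = np.conj(H) * B / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(X))
```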

  5. There has been recent interest in the development of fluorescence microscopes that provide high-speed volumetric imaging for life-science applications. For example, multi-z confocal microscopy enables simultaneous optically-sectioned imaging at multiple depths over relatively large fields of view. However, to date, multi-z microscopy has been hampered by limited spatial resolution owing to its initial design. Here we present a variant of multi-z microscopy that recovers the full spatial resolution of a conventional confocal microscope while retaining the simplicity and ease of use of our initial design. By introducing a diffractive optical element in the illumination path of our microscope, we engineer the excitation beam into multiple tightly focused spots that are conjugated to axially distributed confocal pinholes. We discuss the performance of this multi-z microscope in terms of resolution and detectability and demonstrate its versatility by performing in-vivo imaging of beating cardiomyocytes in engineered heart tissues and neuronal activity in C. elegans and zebrafish brains.
