Title: Real-time, deep-learning aided lensless microscope
Traditional miniaturized fluorescence microscopes are critical tools for modern biology, but they invariably struggle to image with both high spatial resolution and a large field of view (FOV) simultaneously. Lensless microscopes offer a solution to this limitation. However, real-time visualization of samples has not been possible with lensless imaging, as image reconstruction can take minutes to complete. This poses a usability challenge, since real-time visualization is a crucial feature that helps users identify and locate the imaging target. The issue is particularly pronounced in lensless microscopes that operate at close imaging distances, where shift-varying deconvolution is required to account for the variation of the point spread function (PSF) across the FOV. Here, we present a lensless microscope that achieves real-time image reconstruction by eliminating the iterative reconstruction algorithm. The neural network-based reconstruction method we present achieves a more than 10,000-fold increase in reconstruction speed compared to iterative reconstruction. This speedup allows us to visualize the output of our lensless microscope at more than 25 frames per second (fps) while achieving better than 7 µm resolution over a FOV of 10 mm². The ability to reconstruct and visualize samples in real time makes interaction with lensless microscopes far more user-friendly: users can operate these microscopes much as they currently do conventional microscopes.
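As a rough illustration of the central idea, replacing an iterative, shift-varying deconvolution with a single network forward pass, here is a minimal PyTorch sketch. The architecture and layer sizes are hypothetical placeholders, not the authors' network; the point is that reconstruction becomes one fixed-cost inference call, which must complete in under 40 ms per frame to sustain 25 fps.

```python
# Minimal sketch, assuming a learned single-pass reconstruction (PyTorch).
# `ReconNet` is a hypothetical stand-in for the paper's network.
import torch
import torch.nn as nn

class ReconNet(nn.Module):
    """Maps a raw lensless measurement to an image in one forward pass."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = ReconNet().eval()
measurement = torch.randn(1, 1, 512, 512)  # placeholder sensor frame
with torch.no_grad():
    image = model(measurement)  # no iterations: one pass per frame
# Real-time display at 25 fps leaves a budget of < 40 ms for this pass.
```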
Award ID(s):
1730574
PAR ID:
10430588
Author(s) / Creator(s):
Publisher / Repository:
Optical Society of America
Date Published:
Journal Name:
Biomedical Optics Express
Volume:
14
Issue:
8
ISSN:
2156-7085
Format(s):
Medium: X
Size(s):
Article No. 4037
Sponsoring Org:
National Science Foundation
More Like this
  1. Mask-based integrated fluorescence microscopy is a compact imaging technique for biomedical research. It can perform snapshot 3D imaging through a thin optical mask with a scalable field of view (FOV). Integrated microscopy uses computational algorithms for object reconstruction, but efficient reconstruction algorithms for large-scale data have been lacking. Here, we developed DeepInMiniscope, a miniaturized integrated microscope featuring a custom-designed optical mask and an efficient physics-informed deep learning model that markedly reduces computational demand. Parts of the 3D object can be individually reconstructed and combined. Our deep learning algorithm can reconstruct object volumes over 4 mm × 6 mm × 0.6 mm. We demonstrated substantial improvement in both reconstruction quality and speed compared to traditional methods for large-scale data. Notably, we imaged neuronal activity with near-cellular resolution in awake mouse cortex, representing a substantial leap over existing integrated microscopes. DeepInMiniscope holds great promise for scalable, large-FOV, high-speed 3D imaging applications with a compact device footprint.
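The abstract notes that parts of the 3D object can be individually reconstructed and combined. A hedged NumPy sketch of that patch-wise strategy follows; `reconstruct_patch` is a placeholder for the paper's physics-informed network, which is not reproduced here, and the patch and depth sizes are illustrative.

```python
# Hedged sketch: patch-wise 3D reconstruction, then recombination.
import numpy as np

DEPTH = 16  # number of axial planes (illustrative)

def reconstruct_patch(meas_patch):
    # Placeholder: a real model would invert the mask's forward model.
    return np.repeat(meas_patch[None, ...], DEPTH, axis=0)

def reconstruct_volume(measurement, patch=128):
    """Reconstruct each tile independently and stitch into one volume."""
    h, w = measurement.shape
    volume = np.zeros((DEPTH, h, w), dtype=measurement.dtype)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = measurement[y:y + patch, x:x + patch]
            volume[:, y:y + patch, x:x + patch] = reconstruct_patch(tile)
    return volume

vol = reconstruct_volume(np.random.rand(512, 512).astype(np.float32))
print(vol.shape)  # (16, 512, 512): a stack of axial planes
```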
  2. Confocal microscopy is a standard approach for obtaining volumetric images of a sample with high axial and lateral resolution, especially when dealing with scattering samples. Unfortunately, a confocal microscope is quite expensive compared to traditional microscopes. In addition, the point scanning in confocal microscopy leads to slow imaging speed and photobleaching due to the high dose of laser energy. In this paper, we demonstrate how advances in machine learning can be exploited to teach a traditional wide-field microscope, one that is available in every lab, to produce 3D volumetric images like a confocal microscope. The key idea is to obtain multiple images with different focus settings using a wide-field microscope and use a 3D generative adversarial network (GAN) to learn the mapping between the blurry, low-contrast image stacks obtained with a wide-field microscope and the sharp, high-contrast image stacks obtained with a confocal microscope. After training the network with wide-field/confocal stack pairs, it can reliably and accurately reconstruct 3D volumetric images that rival confocal images in lateral resolution, z-sectioning, and image contrast. Our experimental results demonstrate generalization to unseen data, stability in the reconstruction results, and high spatial resolution even when imaging thick (∼40 µm), highly scattering samples. We believe that such learning-based microscopes have the potential to bring confocal imaging quality to every lab that has a wide-field microscope.
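The mapping described here, a blurry wide-field focal stack in, a sharp confocal-like stack out, can be sketched as a small 3D convolutional generator. The block below is a hypothetical stand-in, not the authors' architecture; the GAN discriminator and training loop are omitted.

```python
# Minimal sketch (PyTorch), assuming a 3D generator that maps a wide-field
# focal stack to a confocal-like stack. Sizes are illustrative.
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, stack):  # stack: (N, 1, Z, H, W)
        return self.net(stack)

gen = Generator3D().eval()
widefield_stack = torch.randn(1, 1, 8, 128, 128)  # 8 focus settings
with torch.no_grad():
    confocal_like = gen(widefield_stack)  # same shape, sharpened content
```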
  3. Head-mounted miniaturized two-photon microscopes are powerful tools to record neural activity with cellular resolution deep in the mouse brain during unrestrained, freely moving behavior. Two-photon microscopy, however, is traditionally limited in imaging frame rate by the need to raster-scan the laser excitation spot over a large field of view (FOV). Here, we present two multiplexed miniature two-photon microscopes (M-MINI2Ps) that increase the imaging frame rate while preserving spatial resolution. Two different FOVs are imaged simultaneously and then demixed temporally or computationally. We demonstrate large-scale (500 × 500 µm² FOV) multiplane calcium imaging in the visual cortex and prefrontal cortex of freely moving mice, for both spontaneous activity and auditory-stimulus-evoked responses. Furthermore, the increased speed of M-MINI2Ps also enables two-photon voltage imaging at 400 Hz over a 380 × 150 µm² FOV in freely moving mice. M-MINI2Ps have compact footprints and are compatible with the open-source MINI2P. M-MINI2Ps, together with their design principles, allow the capture of faster physiological dynamics and population recordings over a greater volume than currently possible in freely moving mice, and will be a powerful tool in systems neuroscience.
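The abstract says the two FOVs are demixed temporally or computationally. One common way temporal demixing works in multiplexed two-photon systems is to offset the two beams' pulse trains in time and bin detected photons by arrival time within each laser period; the sketch below illustrates that idea only. The timing values and the simple two-window gating are assumptions, not the instrument's actual parameters.

```python
# Hedged sketch of temporal demultiplexing: two excitation paths share one
# detector, but their pulses are offset, so samples can be binned by
# arrival time within each repetition period. Values are illustrative.
import numpy as np

period_ns = 12.5   # one laser repetition period (80 MHz, assumed)
offset_ns = 6.25   # assumed delay of the second beam's pulse train

arrival_ns = np.random.rand(10_000) * 1e6  # fake photon arrival times
phase = arrival_ns % period_ns             # position within the period
fov_a = arrival_ns[phase < offset_ns]      # photons assigned to beam A
fov_b = arrival_ns[phase >= offset_ns]     # photons assigned to beam B
print(len(fov_a), len(fov_b))
```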
  4. Lensless imaging has emerged as a potential solution towards realizing ultra-miniature cameras by eschewing the bulky lens of a traditional camera. Without a focusing lens, lensless cameras rely on computational algorithms to recover scenes from multiplexed measurements. However, current iterative-optimization-based reconstruction algorithms produce noisy and perceptually poor images. In this work, we propose a non-iterative, deep-learning-based reconstruction approach that yields orders-of-magnitude improvement in image quality for lensless reconstructions. Our approach, called FlatNet, lays down a framework for reconstructing high-quality photorealistic images from mask-based lensless cameras where the camera's forward-model formulation is known. FlatNet consists of two stages: (1) an inversion stage that maps the measurement into a space of intermediate reconstruction by learning parameters within the forward-model formulation, and (2) a perceptual enhancement stage that improves the perceptual quality of this intermediate reconstruction. These stages are trained together end to end. We show high-quality reconstructions through extensive experiments on real and challenging scenes using two different lensless prototypes: one that uses a separable forward model and another that uses a more general non-separable cropped-convolution model. Our end-to-end approach is fast, produces photorealistic reconstructions, and is easy to adopt for other mask-based lensless cameras.
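For the separable case, the two stages can be sketched compactly: if the measurement satisfies Y ≈ P_L X P_R^T, the inversion stage learns left and right matrices so that X_int = W_L Y W_R, and a CNN then enhances the intermediate image. The sketch below follows that structure under assumed shapes; the enhancement network here is a placeholder, not FlatNet's actual one.

```python
# Sketch of FlatNet's two stages for a separable forward model (PyTorch).
# Shapes and the enhancement CNN are illustrative assumptions.
import torch
import torch.nn as nn

class FlatNetSketch(nn.Module):
    def __init__(self, m=256, n=128):
        super().__init__()
        # Stage 1: learned left/right inversion matrices.
        self.W_L = nn.Parameter(torch.randn(n, m) / m**0.5)
        self.W_R = nn.Parameter(torch.randn(m, n) / m**0.5)
        # Stage 2: perceptual enhancement (placeholder CNN).
        self.enhance = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, y):                # y: (N, 1, m, m) measurement
        x_int = self.W_L @ y @ self.W_R  # (N, 1, n, n) intermediate image
        return self.enhance(x_int)

net = FlatNetSketch()
recon = net(torch.randn(2, 1, 256, 256))
print(recon.shape)  # torch.Size([2, 1, 128, 128])
```

Because both stages sit in one module, a single loss on the output trains the inversion matrices and the enhancement network jointly, matching the end-to-end training the abstract describes.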
  5. Point-scanning imaging systems (e.g., scanning electron or laser scanning confocal microscopes) are perhaps the most widely used tools for high-resolution cellular and tissue imaging. Like all other imaging modalities, the resolution, speed, sample preservation, and signal-to-noise ratio (SNR) of point-scanning systems are difficult to optimize simultaneously. In particular, point-scanning systems are uniquely constrained by an inverse relationship between imaging speed and pixel resolution. Here we show these limitations can be mitigated via deep-learning-based super-sampling of undersampled images acquired on a point-scanning system, which we term point-scanning super-resolution (PSSR) imaging. Oversampled ground-truth images acquired on scanning electron or Airyscan laser scanning confocal microscopes were used to generate semi-synthetic training data for PSSR models, which were then used to restore undersampled images. Remarkably, our EM PSSR model was able to restore undersampled images acquired with different optics, detectors, samples, or sample-preparation methods in other labs. PSSR enabled images with previously unattainable xy resolution on our serial block-face scanning electron microscope system. For fluorescence, we show that undersampled confocal images combined with a multiframe PSSR model trained on Airyscan time-lapses facilitate Airyscan-equivalent spatial resolution and SNR with ~100x lower laser dose and 16x higher frame rates than corresponding high-resolution acquisitions. In conclusion, PSSR facilitates point-scanning image acquisition with otherwise unattainable resolution, speed, and sensitivity.
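The semi-synthetic training data described here come from degrading oversampled acquisitions to mimic fast, low-dose scans. A hedged sketch of that pair-generation step follows; the simple subsample-plus-noise degradation is a stand-in for the paper's actual degradation model, and the factor and noise level are illustrative.

```python
# Hedged sketch: build (undersampled input, high-res target) training pairs
# from one oversampled ground-truth image, PSSR-style. Parameters assumed.
import numpy as np

def make_training_pair(hr_image, factor=4, noise_sigma=0.05):
    """Simulate a fast, low-dose scan from a high-resolution acquisition."""
    lr = hr_image[::factor, ::factor]                     # coarser scan grid
    lr = lr + np.random.normal(0, noise_sigma, lr.shape)  # lower SNR
    return lr.astype(np.float32), hr_image.astype(np.float32)

hr = np.random.rand(512, 512)  # placeholder oversampled ground truth
lr, target = make_training_pair(hr)
print(lr.shape, target.shape)  # (128, 128) (512, 512)
# A super-sampling network is then trained to map `lr` back to `target`.
```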