Confocal microscopy is a standard approach for obtaining volumetric images of a sample with high axial and lateral resolution, especially when dealing with scattering samples. Unfortunately, a confocal microscope is quite expensive compared to traditional microscopes. In addition, the point scanning in confocal microscopy leads to slow imaging speed and photobleaching due to the high dose of laser energy. In this paper, we demonstrate how advances in machine learning can be exploited to teach a traditional wide-field microscope, one that is available in every lab, to produce 3D volumetric images like a confocal microscope. The key idea is to obtain multiple images with different focus settings using a wide-field microscope and use a 3D generative adversarial network (GAN) based neural network to learn the mapping between the blurry, low-contrast image stacks obtained with a wide-field microscope and the sharp, high-contrast image stacks obtained with a confocal microscope. After training the network with wide-field-confocal stack pairs, the network can reliably and accurately reconstruct 3D volumetric images that rival confocal images in terms of their lateral resolution, z-sectioning, and image contrast. Our experimental results demonstrate generalization to unseen data, stability of the reconstructions, and high spatial resolution even when imaging thick (∼40 micron), highly scattering samples. We believe that such learning-based microscopes have the potential to bring confocal imaging quality to every lab that has a wide-field microscope.
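As an illustration of the kind of training loop such a widefield-to-confocal network implies, here is a minimal PyTorch sketch, assuming a toy 3D encoder-decoder generator, a 3D patch discriminator, and an adversarial-plus-L1 objective; the layer sizes and hyperparameters are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Toy volumetric generator: wide-field focal stack -> confocal-like stack."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator3D(nn.Module):
    """Toy 3D patch discriminator on (wide-field, candidate confocal) volume pairs."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(ch, 1, 4, stride=2, padding=1),
        )
    def forward(self, wf, vol):
        return self.net(torch.cat([wf, vol], dim=1))

G, D = Generator3D(), Discriminator3D()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# One training step on a registered (wide-field, confocal) stack pair, shape (B, 1, Z, Y, X).
widefield = torch.rand(1, 1, 16, 64, 64)
confocal = torch.rand(1, 1, 16, 64, 64)
fake = G(widefield)

# Discriminator update: real pairs labeled 1, generated pairs labeled 0.
pred_real = D(widefield, confocal)
pred_fake = D(widefield, fake.detach())
d_loss = adv(pred_real, torch.ones_like(pred_real)) + adv(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator update: fool the discriminator while staying close to the confocal target.
pred_fake = D(widefield, fake)
g_loss = adv(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1(fake, confocal)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```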
Real-time, deep-learning aided lensless microscope
Traditional miniaturized fluorescence microscopes are critical tools for modern biology. Invariably, they struggle to simultaneously image with high spatial resolution and a large field of view (FOV). Lensless microscopes offer a solution to this limitation. However, real-time visualization of samples is not possible with lensless imaging, as image reconstruction can take minutes to complete. This poses a challenge for usability, as real-time visualization is a crucial feature that assists users in identifying and locating the imaging target. The issue is particularly pronounced in lensless microscopes that operate at close imaging distances. Imaging at close distances requires shift-varying deconvolution to account for the variation of the point spread function (PSF) across the FOV. Here, we present a lensless microscope that achieves real-time image reconstruction by eliminating the use of an iterative reconstruction algorithm. The neural network-based reconstruction method we show here achieves a more than 10,000-fold increase in reconstruction speed compared to iterative reconstruction. The increased reconstruction speed allows us to visualize the results of our lensless microscope at more than 25 frames per second (fps), while achieving better than 7 µm resolution over a FOV of 10 mm². This ability to reconstruct and visualize samples in real time empowers a more user-friendly interaction with lensless microscopes. Users are able to use these microscopes much like they currently do with conventional microscopes.
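A minimal sketch of the speed argument above, assuming a stand-in convolutional reconstruction network (the actual trained architecture is not reproduced here): once trained, reconstruction is a single forward pass per raw frame, so throughput is set by that pass rather than by hundreds of iterations of shift-varying deconvolution.

```python
import time
import torch
import torch.nn as nn

# Stand-in for a trained feed-forward reconstruction network (illustrative layer sizes).
recon_net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, 3, padding=1),
).eval()

measurement = torch.rand(1, 1, 512, 512)   # one raw, lensless sensor frame

with torch.no_grad():
    t0 = time.perf_counter()
    for _ in range(10):                    # a few frames to estimate throughput
        image = recon_net(measurement)     # one forward pass per frame, no iterations
    fps = 10 / (time.perf_counter() - t0)

print(f"reconstructed {tuple(image.shape[-2:])} image, ~{fps:.1f} fps on this hardware")
```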
- Award ID(s):
- 1730574
- PAR ID:
- 10430588
- Publisher / Repository:
- Optical Society of America
- Date Published:
- Journal Name:
- Biomedical Optics Express
- Volume:
- 14
- Issue:
- 8
- ISSN:
- 2156-7085
- Format(s):
- Medium: X
- Size(s):
- Article No. 4037
- Sponsoring Org:
- National Science Foundation
More Like this
-
Lensless imaging has emerged as a potential solution towards realizing ultra-miniature cameras by eschewing the bulky lens in a traditional camera. Without a focusing lens, lensless cameras rely on computational algorithms to recover the scene from multiplexed measurements. However, current iterative-optimization-based reconstruction algorithms produce noisier and perceptually poorer images. In this work, we propose a non-iterative deep learning-based reconstruction approach that results in orders-of-magnitude improvement in image quality for lensless reconstructions. Our approach, called FlatNet, lays down a framework for reconstructing high-quality photorealistic images from mask-based lensless cameras, where the camera's forward model formulation is known. FlatNet consists of two stages: (1) an inversion stage that maps the measurement into a space of intermediate reconstruction by learning parameters within the forward model formulation, and (2) a perceptual enhancement stage that improves the perceptual quality of this intermediate reconstruction. These stages are trained together in an end-to-end manner. We show high-quality reconstructions through extensive experiments on real and challenging scenes using two different types of lensless prototypes: one that uses a separable forward model and another that uses a more general non-separable cropped-convolution model. Our end-to-end approach is fast, produces photorealistic reconstructions, and is easy to adopt for other mask-based lensless cameras.
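For the separable case, the two-stage idea can be sketched as follows in PyTorch: a trainable left/right matrix inversion produces an intermediate image, which a small CNN then enhances. The shapes, layer sizes, and random initialization below are assumptions for illustration, not the published FlatNet configuration (which initializes the inversion from the calibrated forward model).

```python
import torch
import torch.nn as nn

class SeparableInversion(nn.Module):
    """Intermediate image = W1 @ measurement @ W2, with W1 and W2 learned."""
    def __init__(self, meas_h, meas_w, img_h, img_w):
        super().__init__()
        self.W1 = nn.Parameter(0.01 * torch.randn(img_h, meas_h))
        self.W2 = nn.Parameter(0.01 * torch.randn(meas_w, img_w))
    def forward(self, y):  # y: (B, C, meas_h, meas_w)
        return torch.einsum('ij,bcjk,kl->bcil', self.W1, y, self.W2)

# Stand-in for the perceptual enhancement stage (a much larger network in practice).
enhancer = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 3, 3, padding=1),
)

inversion = SeparableInversion(meas_h=256, meas_w=256, img_h=128, img_w=128)
y = torch.rand(1, 3, 256, 256)        # multiplexed lensless measurement
x_hat = enhancer(inversion(y))        # both stages are trainable end to end
print(x_hat.shape)                    # torch.Size([1, 3, 128, 128])
```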
-
Point-scanning imaging systems (e.g., scanning electron or laser scanning confocal microscopes) are perhaps the most widely used tools for high-resolution cellular and tissue imaging. Like all other imaging modalities, the resolution, speed, sample preservation, and signal-to-noise ratio (SNR) of point-scanning systems are difficult to optimize simultaneously. In particular, point-scanning systems are uniquely constrained by an inverse relationship between imaging speed and pixel resolution. Here we show these limitations can be mitigated via the use of deep learning-based super-sampling of undersampled images acquired on a point-scanning system, which we termed point-scanning super-resolution (PSSR) imaging. Oversampled ground-truth images acquired on scanning electron or Airyscan laser scanning confocal microscopes were used to generate semi-synthetic training data for PSSR models that were then used to restore undersampled images. Remarkably, our EM PSSR model was able to restore undersampled images acquired with different optics, detectors, samples, or sample preparation methods in other labs. PSSR enabled previously unattainable xy resolution images with our serial block-face scanning electron microscope system. For fluorescence, we show that undersampled confocal images combined with a multiframe PSSR model trained on Airyscan timelapses facilitate Airyscan-equivalent spatial resolution and SNR with ~100x lower laser dose and 16x higher frame rates than corresponding high-resolution acquisitions. In conclusion, PSSR facilitates point-scanning image acquisition with otherwise unattainable resolution, speed, and sensitivity.
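The general recipe of semi-synthetic training pairs can be sketched as follows (an assumed simplification, not the released PSSR code): oversampled ground-truth images are computationally degraded to mimic an undersampled, noisier acquisition, and a network is trained to map the degraded input back to the high-resolution target. The degradation model and network here are deliberately minimal.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def crappify(hr, factor=4, noise_std=0.05):
    """Simulate an undersampled, noisier point-scanning acquisition from ground truth."""
    lr = F.avg_pool2d(hr, factor)                       # fewer pixels (undersampling)
    lr = lr + noise_std * torch.randn_like(lr)          # added detector noise
    return F.interpolate(lr, scale_factor=factor, mode='bilinear', align_corners=False)

# Stand-in super-sampling network (PSSR uses a far larger restoration model).
sr_net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(sr_net.parameters(), lr=1e-4)

hr_stack = torch.rand(8, 1, 128, 128)                   # oversampled ground-truth patches
for hr in hr_stack.split(4):
    lr = crappify(hr)                                   # semi-synthetic undersampled input
    loss = F.mse_loss(sr_net(lr), hr)                   # restore toward the oversampled target
    opt.zero_grad(); loss.backward(); opt.step()
```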
-
Machine learning image recognition and classification of particles and materials is a rapidly expanding field. However, nanomaterial identification and classification depend on the image resolution, the image field of view, and the processing time. Optical microscopes are among the most widely utilized technologies in laboratories across the world, due to their nondestructive ability to identify and classify critical micro-sized objects and processes, but identifying and classifying critical nano-sized objects and processes with a conventional microscope is outside its capabilities, due to the diffraction limit of the optics and the small field of view. To overcome these challenges of nanomaterial identification and classification, we developed an intelligent nanoscope that combines machine learning and microsphere array-based imaging to: (1) surpass the diffraction limit of the microscope objective with microsphere imaging to provide high-resolution images; (2) provide large field-of-view imaging without the sacrifice of resolution by utilizing a microsphere array; and (3) rapidly classify nanomaterials using a deep convolutional neural network. The intelligent nanoscope delivers more than 46 magnified images from a single image frame, so we collected more than 1000 images within 2 seconds. Moreover, the intelligent nanoscope achieves a 95% nanomaterial classification accuracy using a training set of 1000 images, which is 45% more accurate than without the microsphere array. The intelligent nanoscope also achieves a 92% bacteria classification accuracy using a training set of 50,000 images, which is 35% more accurate than without the microsphere array. This platform accomplished rapid, accurate detection and classification of nanomaterials with minuscule size differences. The capabilities of this device hold the potential to detect and classify even smaller biological nanomaterials, such as viruses or extracellular vesicles.
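A rough sketch of the workflow this implies (illustrative only, not the authors' pipeline): a single camera frame containing an array of microsphere-magnified views is split into per-sphere patches, and each patch is classified by a small convolutional network. The grid size, patch size, and classifier below are assumptions.

```python
import torch
import torch.nn as nn

# Stand-in nanomaterial classifier; a trained deep CNN would replace this.
classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 3),                              # e.g., three material classes (assumed)
)

frame = torch.rand(1, 1, 448, 448)                 # one frame covering a 7x7 microsphere array
patches = frame.unfold(2, 64, 64).unfold(3, 64, 64)  # one 64x64 magnified view per microsphere
patches = patches.reshape(-1, 1, 64, 64)           # 49 sub-images from a single camera frame

with torch.no_grad():
    labels = classifier(patches).argmax(dim=1)     # one class prediction per microsphere view
print(labels.shape)                                # torch.Size([49])
```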
-
The simple and compact optics of lensless microscopes and the associated computational algorithms allow for large fields of view and the refocusing of the captured images. However, existing lensless techniques cannot accurately reconstruct the typical low-contrast images of optically dense biological tissue. Here we show that lensless imaging of tissue in vivo can be achieved via an optical phase mask designed to create a point spread function consisting of high-contrast contours with a broad spectrum of spatial frequencies. We built a prototype lensless microscope incorporating the 'contour' phase mask and used it to image calcium dynamics in the cortex of live mice (over a field of view of about 16 mm²) and in freely moving Hydra vulgaris, as well as microvasculature in the oral mucosa of volunteers. The low cost, small form factor and computational refocusing capability of in vivo lensless microscopy may open it up to clinical uses, especially for imaging difficult-to-reach areas of the body.
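To make the computational step concrete, here is a minimal NumPy sketch, assuming a calibrated PSF per depth and simple Wiener deconvolution as a stand-in for the paper's full reconstruction: a single captured frame can be reconstructed against the PSF of any calibrated depth, which is what makes post-capture refocusing possible.

```python
import numpy as np

def wiener_deconvolve(measurement, psf, reg=1e-2):
    """Frequency-domain deconvolution of a lensless measurement with one calibrated PSF."""
    H = np.fft.fft2(psf, s=measurement.shape)          # zero-pad PSF to the frame size
    Y = np.fft.fft2(measurement)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + reg)        # regularized (Wiener-style) inverse filter
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(0)
measurement = rng.random((256, 256))                   # raw mask-modulated frame (placeholder data)
psf_stack = {z: rng.random((64, 64)) for z in (0, 50, 100)}  # PSFs calibrated at example depths (µm)

# "Refocusing" after capture: reconstruct the same frame against each depth's PSF.
refocused = {z: wiener_deconvolve(measurement, psf) for z, psf in psf_stack.items()}
print({z: img.shape for z, img in refocused.items()})
```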