Mask-based integrated fluorescence microscopy is a compact imaging technique for biomedical research. It can perform snapshot 3D imaging through a thin optical mask with a scalable field of view (FOV). Integrated microscopy relies on computational algorithms for object reconstruction, but efficient reconstruction algorithms for large-scale data have been lacking. Here, we developed DeepInMiniscope, a miniaturized integrated microscope featuring a custom-designed optical mask and an efficient physics-informed deep learning model that markedly reduces computational demand. Parts of the 3D object can be reconstructed individually and then combined. Our deep learning algorithm can reconstruct object volumes spanning 4 mm × 6 mm × 0.6 mm. We demonstrated substantial improvements in both reconstruction quality and speed over traditional methods for large-scale data. Notably, we imaged neuronal activity with near-cellular resolution in the awake mouse cortex, a substantial leap over existing integrated microscopes. DeepInMiniscope holds great promise for scalable, large-FOV, high-speed 3D imaging applications with a compact device footprint.
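The patch-wise strategy ("parts of the 3D object can be individually reconstructed and combined") can be illustrated with a minimal sketch. The function name `patchwise` and the identity stand-in for the learned model are hypothetical, and a real system would use overlapping tiles with blending at the seams rather than the non-overlapping split shown here:

```python
import numpy as np

def patchwise(measurement, recon_fn, patch=64):
    """Split a measurement into non-overlapping tiles, reconstruct each
    independently with recon_fn (a stand-in for the learned model), and
    reassemble the full field of view."""
    h, w = measurement.shape
    out = np.empty_like(measurement)
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            out[r:r + patch, c:c + patch] = recon_fn(measurement[r:r + patch, c:c + patch])
    return out

# With an identity "model", tiling and reassembly round-trips exactly.
m = np.random.default_rng(3).random((128, 128))
out = patchwise(m, lambda t: t)
print(np.allclose(out, m))
```

Because each tile is processed independently, the per-tile memory footprint is fixed regardless of the total FOV, which is how this style of reconstruction scales to large volumes.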
Real-time, deep-learning aided lensless microscope
Traditional miniaturized fluorescence microscopes are critical tools for modern biology, but they struggle to image with high spatial resolution and a large field of view (FOV) simultaneously. Lensless microscopes offer a solution to this limitation. However, real-time visualization of samples has not been possible with lensless imaging, as image reconstruction can take minutes to complete. This poses a challenge for usability, since real-time visualization is a crucial feature that helps users identify and locate the imaging target. The issue is particularly pronounced in lensless microscopes that operate at close imaging distances, where reconstruction requires shift-varying deconvolution to account for the variation of the point spread function (PSF) across the FOV. Here, we present a lensless microscope that achieves real-time image reconstruction by eliminating the iterative reconstruction algorithm. The neural-network-based reconstruction method we present achieves a more than 10,000-fold increase in reconstruction speed compared to iterative reconstruction. This speedup allows us to visualize the output of our lensless microscope at more than 25 frames per second (fps) while achieving better than 7 µm resolution over a FOV of 10 mm². The ability to reconstruct and visualize samples in real time makes interaction with lensless microscopes far more user-friendly: users can operate them much as they currently do conventional microscopes.
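For context, the classical baseline that such a network replaces can be sketched as a one-shot Wiener deconvolution under a simplified shift-invariant forward model. The close-distance setting described above needs shift-varying PSFs, so this is only the simplest illustrative case; `wiener_deconvolve` is a name chosen here, not from the paper:

```python
import numpy as np

def wiener_deconvolve(measurement, psf, eps=1e-3):
    """One-shot Wiener deconvolution assuming a single, shift-invariant PSF.
    eps regularizes frequencies where the PSF has little energy."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=measurement.shape)
    Y = np.fft.fft2(measurement)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(X))

# Simulate: a point source blurred by a centered Gaussian PSF.
n = 64
x = np.zeros((n, n)); x[n // 2, n // 2] = 1.0
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((yy - n // 2) ** 2 + (xx - n // 2) ** 2) / (2 * 3.0 ** 2))
psf /= psf.sum()
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(np.fft.ifftshift(psf))))
recon = wiener_deconvolve(y, psf)
print(np.unravel_index(recon.argmax(), recon.shape))  # recovers the source location
```

A shift-varying system cannot use a single FFT pair like this; it must either tile the FOV with locally valid PSFs or, as in the work above, learn the inverse mapping directly.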
- Award ID(s): 1730574
- PAR ID: 10493064
- Publisher / Repository: Biomedical Optics Express
- Date Published:
- Journal Name: Biomedical Optics Express
- Volume: 14
- Issue: 8
- ISSN: 2156-7085
- Page Range / eLocation ID: 4037
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Head-mounted miniaturized two-photon microscopes are powerful tools for recording neural activity with cellular resolution deep in the mouse brain during unrestrained, free-moving behavior. Two-photon microscopy, however, is traditionally limited in imaging frame rate by the need to raster-scan the laser excitation spot over a large field of view (FOV). Here, we present two multiplexed miniature two-photon microscopes (M-MINI2Ps) that increase the imaging frame rate while preserving spatial resolution. Two different FOVs are imaged simultaneously and then demixed temporally or computationally. We demonstrate large-scale (500×500 µm² FOV) multiplane calcium imaging in the visual cortex and prefrontal cortex of freely moving mice, for both spontaneous activity and auditory-stimulus-evoked responses. Furthermore, the increased speed of M-MINI2Ps also enables two-photon voltage imaging at 400 Hz over a 380×150 µm² FOV in freely moving mice. M-MINI2Ps have compact footprints and are compatible with the open-source MINI2P. M-MINI2Ps, together with their design principles, allow the capture of faster physiological dynamics and population recordings over a greater volume than currently possible in freely moving mice, and will be a powerful tool in systems neuroscience.
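The computational demixing step can be sketched under the simplifying assumption of a calibrated linear crosstalk (mixing) matrix between the two FOVs; the matrix values and signal traces below are illustrative, not from the paper:

```python
import numpy as np

# Two ground-truth activity traces, one per FOV (hypothetical signals).
t = np.linspace(0, 1, 200)
s_true = np.vstack([
    np.sin(2 * np.pi * 3 * t) ** 2,       # FOV 1: oscillatory activity
    np.exp(-((t - 0.5) ** 2) / 0.005),    # FOV 2: a single transient
])

# Calibrated mixing matrix: each recorded channel sees mostly one FOV
# plus some crosstalk from the other.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
mixed = A @ s_true                         # what the detector records

# Demix by solving the linear system with the calibrated matrix.
s_hat = np.linalg.solve(A, mixed)
print(np.max(np.abs(s_hat - s_true)))      # ~0, up to floating-point error
```

Temporal demixing in hardware works differently (interleaved excitation pulses are time-gated), but the computational variant reduces to exactly this kind of linear unmixing once the crosstalk is characterized.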
Lensless imaging has emerged as a potential route to ultra-miniature cameras by eschewing the bulky lens of a traditional camera. Without a focusing lens, lensless cameras rely on computational algorithms to recover the scene from multiplexed measurements. However, current iterative-optimization-based reconstruction algorithms produce noisy and perceptually poor images. In this work, we propose a non-iterative deep-learning-based reconstruction approach that yields orders-of-magnitude improvement in image quality for lensless reconstructions. Our approach, called FlatNet, lays down a framework for reconstructing high-quality photorealistic images from mask-based lensless cameras for which the camera's forward-model formulation is known. FlatNet consists of two stages: (1) an inversion stage that maps the measurement into a space of intermediate reconstruction by learning parameters within the forward-model formulation, and (2) a perceptual enhancement stage that improves the perceptual quality of this intermediate reconstruction. These stages are trained together end-to-end. We show high-quality reconstructions through extensive experiments on real and challenging scenes using two different lensless prototypes: one that uses a separable forward model and another that uses a more general non-separable cropped-convolution model. Our end-to-end approach is fast, produces photorealistic reconstructions, and is easy to adapt to other mask-based lensless cameras.
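For the separable-forward-model prototype, the inversion stage can be sketched in the noiseless case: the measurement is Y = Φ_L · X · Φ_Rᵀ, and the learnable inversion matrices can be initialized from pseudo-inverses of the calibrated Φ matrices (FlatNet then fine-tunes them end-to-end together with the enhancement CNN, which is omitted here; the sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 32, 48                          # scene 32×32, sensor 48×48 (illustrative)
Phi_L = rng.standard_normal((m, n))    # calibrated separable forward model
Phi_R = rng.standard_normal((m, n))

X = rng.random((n, n))                 # ground-truth scene
Y = Phi_L @ X @ Phi_R.T                # separable measurement: Y = Φ_L X Φ_Rᵀ

# Inversion stage, initialized from pseudo-inverses of the forward model.
# In FlatNet these matrices are trainable; here they stay fixed.
W_L = np.linalg.pinv(Phi_L)
W_R = np.linalg.pinv(Phi_R)
X_hat = W_L @ Y @ W_R.T

print(np.abs(X_hat - X).max())         # near zero in this noiseless sketch
```

With noise and model mismatch the pseudo-inverse alone produces the "noisier, perceptually poorer" intermediate described above, which is why the second, perceptual enhancement stage exists.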
Point-scanning imaging systems (e.g. scanning electron or laser scanning confocal microscopes) are perhaps the most widely used tools for high-resolution cellular and tissue imaging. Like all other imaging modalities, the resolution, speed, sample preservation, and signal-to-noise ratio (SNR) of point-scanning systems are difficult to optimize simultaneously. In particular, point-scanning systems are uniquely constrained by an inverse relationship between imaging speed and pixel resolution. Here we show that these limitations can be mitigated via deep-learning-based super-sampling of undersampled images acquired on a point-scanning system, which we term point-scanning super-resolution (PSSR) imaging. Oversampled ground-truth images acquired on scanning electron or Airyscan laser scanning confocal microscopes were used to generate semi-synthetic training data for PSSR models, which were then used to restore undersampled images. Remarkably, our EM PSSR model was able to restore undersampled images acquired with different optics, detectors, samples, or sample preparation methods in other labs. PSSR enabled previously unattainable xy resolution with our serial block-face scanning electron microscope system. For fluorescence, we show that undersampled confocal images combined with a multiframe PSSR model trained on Airyscan timelapses yield Airyscan-equivalent spatial resolution and SNR with ~100× lower laser dose and 16× higher frame rates than corresponding high-resolution acquisitions. In conclusion, PSSR facilitates point-scanning image acquisition with otherwise unattainable resolution, speed, and sensitivity.
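The semi-synthetic training-pair generation can be sketched as a degradation step that downsamples oversampled ground truth and adds noise, so that each low-resolution input has the original high-resolution image as its restoration target. The function `crappify` and its parameters are illustrative, not the authors' exact pipeline:

```python
import numpy as np

def crappify(hr, factor=4, noise_sigma=0.05, seed=0):
    """Make a semi-synthetic undersampled input from an oversampled
    ground-truth image: block-average by `factor`, then add Gaussian
    noise to mimic the lower SNR of a fast, low-dose acquisition."""
    rng = np.random.default_rng(seed)
    h, w = hr.shape
    lr = hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return lr + rng.normal(0.0, noise_sigma, lr.shape)

hr = np.random.default_rng(2).random((64, 64))  # stand-in for an oversampled frame
lr = crappify(hr)
print(hr.shape, lr.shape)                       # 64×64 target, 16×16 input
```

Training a network on many such (lr, hr) pairs, then applying it to genuinely undersampled acquisitions, is the core of the PSSR idea described above.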
Machine-learning image recognition and classification of particles and materials is a rapidly expanding field. However, nanomaterial identification and classification depend on image resolution, image field of view, and processing time. Optical microscopes are among the most widely used technologies in laboratories worldwide, thanks to their nondestructive ability to identify and classify critical micro-sized objects and processes; identifying and classifying critical nano-sized objects and processes, however, is beyond the capabilities of a conventional microscope, due to the diffraction limit of its optics and its small field of view. To overcome these challenges, we developed an intelligent nanoscope that combines machine learning and microsphere-array-based imaging to: (1) surpass the diffraction limit of the microscope objective with microsphere imaging to provide high-resolution images; (2) provide large-field-of-view imaging without sacrificing resolution by utilizing a microsphere array; and (3) rapidly classify nanomaterials using a deep convolutional neural network. The intelligent nanoscope delivers more than 46 magnified images from a single image frame, allowing us to collect more than 1000 images within 2 seconds. Moreover, the intelligent nanoscope achieves 95% nanomaterial classification accuracy with a training set of 1000 images, 45% more accurate than without the microsphere array, and 92% bacteria classification accuracy with a training set of 50,000 images, 35% more accurate than without the microsphere array. This platform accomplished rapid, accurate detection and classification of nanomaterials with minuscule size differences. These capabilities give the device the potential to detect and classify even smaller biological nanomaterials, such as viruses or extracellular vesicles.
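The "more than 46 magnified images from a single frame" step amounts to cropping one sub-image per microsphere before handing each patch to the classifier. A minimal sketch, assuming a hypothetical 7×7 microsphere grid and 32-pixel patches (both numbers chosen here for illustration):

```python
import numpy as np

def tile_microsphere_views(frame, grid=(7, 7), patch=32):
    """Crop one sub-image per microsphere from a single camera frame.
    A 7×7 array yields 49 magnified views per frame, matching the scale
    of the ">46 images per frame" figure quoted above."""
    rows, cols = grid
    views = [frame[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
             for r in range(rows) for c in range(cols)]
    return np.stack(views)

frame = np.zeros((7 * 32, 7 * 32))     # stand-in for one camera frame
views = tile_microsphere_views(frame)
print(views.shape)                     # one batch of patches per frame
```

Batching dozens of views per frame like this is what lets the system accumulate over 1000 classifier inputs within a couple of seconds of acquisition.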