One of the open challenges in lensless imaging is understanding how well such cameras resolve scenes in three dimensions. The measurement models underlying prior lensless imagers lack the special structure that facilitates deeper analysis; as a result, a theoretical study of the achievable spatio-axial resolution has been lacking. This paper provides such a theoretical framework by analyzing a generalization of a mask-based lensless camera in which the sensor captures z-stacked measurements, acquired by moving the sensor relative to an attenuating mask. We show that the z-stacked measurements are related to the scene's volumetric albedo function via a three-dimensional convolutional operator. The specifics of this convolution, and its Fourier transform, allow us to fully characterize the spatial and axial resolving power of the camera, including its dependence on the mask. Since z-stacked measurements are a superset of those made by previously studied lensless systems, these results provide an upper bound on their performance. We numerically evaluate the theory and its implications using simulations.
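As a rough illustration (not the paper's implementation), the z-stacked measurement model described above can be sketched as a three-dimensional convolution of a volumetric albedo with a mask-dependent kernel, whose Fourier transform then characterizes the resolving power. The array sizes and kernel values below are arbitrary placeholders:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# Hypothetical volumetric albedo: (depth, height, width) voxels.
albedo = rng.random((16, 32, 32))

# Mask-dependent 3-D kernel (placeholder values, not a real mask model).
kernel = rng.random((5, 7, 7))

# z-stacked measurements modeled as a 3-D convolution of albedo and kernel.
measurements = fftconvolve(albedo, kernel, mode="same")

# The 3-D transfer function (Fourier transform of the kernel) is what
# characterizes the spatial and axial resolving power in this framework.
transfer = np.abs(np.fft.fftn(kernel, s=albedo.shape))

print(measurements.shape, transfer.shape)
```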
- NSF-PAR ID: 10390661
- Publisher / Repository: Optical Society of America
- Journal Name: Optics Express
- Volume: 31
- Issue: 2
- ISSN: 1094-4087; OPEXFF
- Page Range / eLocation ID: Article No. 2538
- Sponsoring Org: National Science Foundation
More Like this
Lensless imaging has emerged as a potential route to ultra-miniature cameras by eschewing the bulky lens of a traditional camera. Without a focusing lens, lensless cameras rely on computational algorithms to recover the scene from multiplexed measurements. However, current iterative-optimization-based reconstruction algorithms produce noisy, perceptually poor images. In this work, we propose a non-iterative, deep-learning-based reconstruction approach that yields an orders-of-magnitude improvement in image quality for lensless reconstructions. Our approach, called FlatNet, lays down a framework for reconstructing high-quality photorealistic images from mask-based lensless cameras whose forward-model formulation is known. FlatNet consists of two stages: (1) an inversion stage that maps the measurement into an intermediate reconstruction space by learning parameters within the forward-model formulation, and (2) a perceptual enhancement stage that improves the perceptual quality of this intermediate reconstruction. The two stages are trained together end to end. We show high-quality reconstructions through extensive experiments on real and challenging scenes using two different lensless prototypes: one with a separable forward model and another with a more general non-separable cropped-convolution model. Our end-to-end approach is fast, produces photorealistic reconstructions, and is easy to adapt to other mask-based lensless cameras.
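FlatNet's learned inversion stage is easiest to picture in the separable case, where a measurement can be written as Y = P_L X P_R^T. The sketch below uses hypothetical random matrices and a pseudo-inverse in place of the learned weights W_L, W_R (which FlatNet would refine end to end), so it is an illustration of the initialization idea rather than the trained network:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical separable forward model: Y = PL @ X @ PR.T
n, m = 64, 256                 # scene side and sensor side (placeholder sizes)
PL = rng.standard_normal((m, n))
PR = rng.standard_normal((m, n))
X = rng.random((n, n))         # toy scene
Y = PL @ X @ PR.T              # simulated lensless measurement

# Inversion stage: FlatNet learns WL, WR jointly with the enhancement
# network; here we simply initialize them with pseudo-inverses, as one
# might before training.
WL = np.linalg.pinv(PL)
WR = np.linalg.pinv(PR)
X_hat = WL @ Y @ WR.T          # intermediate reconstruction

print(np.max(np.abs(X_hat - X)))
```

In this noiseless toy setting the pseudo-inverse recovers the scene exactly; the value of learning W_L, W_R (and the enhancement stage) shows up on real, noisy, cropped measurements.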
We report a new, to the best of our knowledge, lensless microscopy configuration that integrates the concepts of transverse-translational ptychography and defocus multi-height phase retrieval. In this approach, we place a tilted image sensor under the specimen to introduce linearly increasing phase modulation along one lateral direction. As in ptychography, we laterally translate the specimen and acquire diffraction images for reconstruction. Since the axial distance between the specimen and the sensor varies across lateral positions, laterally translating the specimen effectively provides defocus multi-height measurements while eliminating axial scanning. Lateral translation also introduces sub-pixel shifts for pixel super-resolution imaging and naturally expands the field of view for rapid whole-slide imaging. We show that the equivalent height variation can be precisely estimated from the lateral shift of the specimen, thereby addressing the challenge of precise axial positioning in conventional multi-height phase retrieval. Using a sensor with 1.67 µm pixel size, our low-cost, field-portable prototype resolves the 690 nm linewidth on a resolution target. We show that a whole-slide image of a blood smear can be acquired in 18 s. We also demonstrate accurate automatic white blood cell counting from the recovered image. The reported approach may provide a turnkey solution for point-of-care and telemedicine-related challenges.
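The tilt-to-defocus relationship above reduces to simple geometry: a sensor tilted by angle θ makes the specimen-to-sensor distance grow linearly with lateral position, so translating the specimen by Δx changes the effective height by Δx·tan(θ). A minimal sketch, with all numerical values illustrative rather than the prototype's actual geometry:

```python
import numpy as np

theta = np.radians(10.0)          # hypothetical sensor tilt angle
z0 = 500.0                        # nominal specimen-to-sensor distance, in um
shifts = np.arange(0, 100, 20.0)  # lateral specimen shifts, in um

# Each lateral shift maps to a distinct defocus distance, so axial
# scanning is replaced by lateral translation under the tilted sensor.
heights = z0 + shifts * np.tan(theta)

print(heights)
```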
Lensless cameras are ultra-thin imaging systems that replace the lens with a thin passive optical mask and computation. Passive mask-based lensless cameras encode depth information in their measurements over a certain depth range. Early works have shown that this encoded depth can be used to perform 3D reconstruction of close-range scenes. However, these 3D reconstruction approaches are typically optimization based and require strong hand-crafted priors and hundreds of iterations. Moreover, the reconstructions suffer from low resolution, noise, and artifacts. In this work, we propose FlatNet3D, a feed-forward deep network that estimates both depth and intensity from a single lensless capture. FlatNet3D is an end-to-end trainable deep network that directly reconstructs depth and intensity from a lensless measurement using an efficient physics-based 3D mapping stage and a fully convolutional network. Our algorithm is fast and produces high-quality results, which we validate on both simulated and real scenes captured using PhlatCam.
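One way to caricature a physics-based 3-D mapping stage (a sketch under assumed toy data, not FlatNet3D's actual layer) is to correlate the single 2-D measurement with a stack of depth-dependent PSFs, producing an intermediate 3-D volume that a convolutional network could then refine:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(2)

measurement = rng.random((64, 64))   # single lensless capture (toy data)
psf_stack = rng.random((8, 15, 15))  # one hypothetical PSF per depth plane

# Cross-correlate the measurement with each depth's PSF; correlation is
# convolution with a spatially flipped kernel.
volume = np.stack([
    fftconvolve(measurement, psf[::-1, ::-1], mode="same")
    for psf in psf_stack
])

print(volume.shape)
```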
Ultra-miniaturized microendoscopes are vital for numerous biomedical applications. Such minimally invasive imagers allow navigation into hard-to-reach regions and observation of deep-brain activity in freely moving animals. Conventional solutions use distal microlenses; however, as lenses become smaller and less invasive, they develop greater aberrations and restricted fields of view. In addition, most imagers capable of variable focusing require mechanical actuation of the lens, increasing distal complexity and weight. Here, we demonstrate a distal lens-free approach to microendoscopy enabled by computational image recovery. Our approach is entirely actuation free and uses a single pseudorandom spatial mask at the distal end of a multicore fiber. Experimentally, this lensless approach increases the space-bandwidth product, i.e., field of view divided by resolution, threefold over a best-case lens-based system. In addition, the microendoscope demonstrates color-resolved imaging and refocusing to 11 distinct depth planes from a single camera frame without any actuated parts.
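Using the abstract's definition of space-bandwidth product (SBP) as field of view divided by resolution, the threefold claim is simple arithmetic. The numbers below are made up for illustration and are not the paper's measured values:

```python
# Space-bandwidth product (SBP) as defined above: field of view divided
# by resolution. All numbers are hypothetical placeholders.
def sbp(fov_um: float, resolution_um: float) -> float:
    return fov_um / resolution_um

lens_based = sbp(fov_um=200.0, resolution_um=2.0)  # toy lens-based system
lensless = sbp(fov_um=300.0, resolution_um=1.0)    # toy mask-based system

print(lensless / lens_based)  # → 3.0, a threefold SBP gain
```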
Lensless imaging is an emerging modality in which image sensors use optical elements in front of the sensor to perform multiplexed imaging. Several recent papers have addressed reconstructing images from lensless imagers, including methods that use deep learning for state-of-the-art performance. However, many of these methods require explicit knowledge of the optical element, such as its point spread function (PSF), or learn the reconstruction mapping for a single fixed PSF. In this paper, we explore a neural network architecture that performs joint image reconstruction and PSF estimation to robustly recover images captured with multiple PSFs from different cameras. Using adversarial learning, this approach achieves improved reconstruction results that do not require explicit knowledge of the PSF at test time and shows an added improvement in the reconstruction model's ability to generalize to variations in the camera's PSF. This allows lensless cameras to be used in a wider range of applications requiring multiple cameras, without the need to explicitly train a separate model for each new camera.
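The network described above estimates the PSF jointly with the image; a minimal classical point of comparison is deconvolution once a PSF estimate is in hand. The sketch below (toy data, not the paper's method) applies a Wiener filter under a circular-convolution assumption, using the true PSF as a stand-in for an estimated one:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy scene and a deliberately well-conditioned toy PSF (assumptions for
# illustration only).
x = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[0, 0], psf[0, 1], psf[1, 0] = 1.0, 0.5, 0.25

# Circular-convolution forward model: y = x (*) psf.
H = np.fft.fft2(psf)
y = np.real(np.fft.ifft2(np.fft.fft2(x) * H))

# Wiener deconvolution with a PSF estimate (here: the true H).
eps = 1e-8  # regularization; would be tuned against real sensor noise
x_hat = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + eps)))

print(np.max(np.abs(x_hat - x)))
```

The learned approach in the paper replaces this fixed filter with a network that both estimates the PSF and reconstructs, which is what lets it generalize across cameras.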