The two-point-source resolution criterion is widely used to quantify the performance of imaging systems. The two main approaches for computing two-point-source resolution are detection-theoretic and visual analyses. The former assumes a shift-invariant system and cannot incorporate two different point spread functions (PSFs), which may be required in situations such as computing axial resolution. The latter, which includes the Rayleigh criterion, relies on the peak-to-valley ratio and does not properly account for the presence of noise. We present a heuristic generalization of the visual two-point-source resolution criterion using Gaussian processes (GP). This heuristic criterion is applicable to both shift-invariant and shift-variant imaging modalities, and it can incorporate different definitions of resolution expressed in terms of varying peak-to-valley ratios. Our approach implicitly incorporates noise statistics such as the variance or signal-to-noise ratio by making assumptions about the spatial correlation of the PSFs in the form of kernel functions, and it does not rely on an analytic form of the PSF.
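As a rough illustration of how such a criterion could be evaluated numerically, the minimal sketch below fits a Gaussian process to noisy samples of two overlapping PSFs and tests whether the dip between the peaks falls below a chosen peak-to-valley threshold. It is not the authors' implementation: the Gaussian PSF shape, the RBF kernel, and the ~0.735 (Airy/Rayleigh-like) threshold are illustrative assumptions.

```python
# Minimal sketch: GP-based check of whether two noisy point-source responses
# are resolved. Gaussian PSFs, an RBF kernel, and the ~0.735 peak-to-valley
# threshold are illustrative assumptions, not the paper's exact choices.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def resolved(separation, psf_sigma=1.0, noise_std=0.05, valley_ratio=0.735, seed=0):
    rng = np.random.default_rng(seed)
    x = np.linspace(-5, 5, 200)
    # Sum of two point-source responses plus additive noise.
    signal = (np.exp(-(x - separation / 2) ** 2 / (2 * psf_sigma ** 2)) +
              np.exp(-(x + separation / 2) ** 2 / (2 * psf_sigma ** 2)))
    y = signal + rng.normal(0, noise_std, x.size)

    # The kernel length scale encodes the assumed spatial correlation of the
    # PSF; alpha encodes the noise variance (this is where noise enters).
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=psf_sigma),
                                  alpha=noise_std ** 2, normalize_y=True)
    gp.fit(x.reshape(-1, 1), y)
    mean = gp.predict(x.reshape(-1, 1))

    peak = mean.max()
    valley = mean[np.abs(x) < separation / 2].min() if separation > 0 else peak
    return valley / peak < valley_ratio  # True -> the two sources are resolved

print(resolved(separation=3.0))   # well separated -> likely True
print(resolved(separation=0.5))   # heavily overlapped -> likely False
```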
Longitudinal resolution of three-dimensional integral imaging in the presence of noise
The two-point-source longitudinal resolution of three-dimensional integral imaging depends on several factors, including the number of sensors, the sensor pixel size, the pitch between sensors, and the lens point spread function. We assume the two point sources to be resolved if their point spread functions can be resolved in any one of the sensors. Previous studies of integral imaging longitudinal resolution either rely on a geometrical optics formulation or assume the point spread function to be of sub-pixel size, thus neglecting the effect of the lens. These studies also assume both point sources to be in focus in the captured elemental images and, more importantly, do not consider the effect of noise. In this manuscript, we use the Gaussian process-based two-point-source resolution criterion to overcome these limitations. We compute the circle of confusion to model the out-of-focus blurring effect, and the Gaussian process-based criterion allows us to study the effect of noise on the longitudinal resolution. In the absence of noise, we also present a simple analytical expression for the longitudinal resolution that approximately matches the Gaussian process-based formulation. We further investigate the dependence of the longitudinal resolution on the parallax of the integral imaging system. We present optical experiments to validate our results; the experiments demonstrate agreement with our Gaussian process-based two-point-source resolution criterion.
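As a back-of-the-envelope companion to the noiseless analysis, the sketch below applies the standard thin-lens circle-of-confusion formula to each elemental image of a simple one-dimensional sensor array and checks whether the two sources are separated by more than the blur in at least one sensor. The geometry, parameter values, and the approximate disparity expression are illustrative assumptions rather than the paper's exact model.

```python
# Minimal sketch: thin-lens circle of confusion for an out-of-focus point,
# evaluated for each sensor in a simple 1D integral-imaging array.
# All parameter values and the approximate disparity model are illustrative.
import numpy as np

def coc_diameter(z, z_focus, f, aperture):
    """Standard thin-lens circle-of-confusion diameter (same units as inputs)."""
    return aperture * f * abs(z - z_focus) / (z * (z_focus - f))

f = 25e-3           # focal length (m)
aperture = 12.5e-3  # aperture diameter (m)
pitch = 10e-3       # pitch between adjacent sensors (m)
pixel = 5e-6        # sensor pixel size (m)
n_sensors = 9
z_focus = 0.5       # distance at which the lenses are focused (m)

z1, z2 = 0.50, 0.52  # axial positions of the two point sources (m)
sensor_x = (np.arange(n_sensors) - n_sensors // 2) * pitch

# Approximate image-plane separation of the two sources in each elemental
# image (magnification ~ f/z for distant objects); blur is CoC plus one pixel.
sep = np.abs(sensor_x) * abs(1.0 / z1 - 1.0 / z2) * f
blur = 0.5 * (coc_diameter(z1, z_focus, f, aperture) +
              coc_diameter(z2, z_focus, f, aperture)) + pixel

# Resolved if the separation exceeds the blur in at least one elemental image.
print("resolved in any sensor:", bool(np.any(sep > blur)))
```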
- Award ID(s): 2141473
- PAR ID: 10550363
- Publisher / Repository: Optical Society of America
- Date Published:
- Journal Name: Optics Express
- Volume: 32
- Issue: 23
- ISSN: 1094-4087; OPEXFF
- Format(s): Medium: X
- Size(s): Article No. 40605
- Sponsoring Org: National Science Foundation
More Like this
In this paper, we have used the angular spectrum propagation method and numerical simulations of a single random phase encoding (SRPE)-based lensless imaging system to quantify the spatial resolution of the system and assess its dependence on the system's physical parameters. Our compact SRPE imaging system consists of a laser diode that illuminates a sample placed on a microscope glass slide, a diffuser that spatially modulates the optical field transmitted through the input object, and an image sensor that captures the intensity of the modulated field. We have considered two point-source apertures as the input object and analyzed the propagated optical field captured by the image sensor. The captured output intensity patterns acquired at each lateral separation between the input point sources were analyzed by correlating the captured output pattern for overlapping point sources with the captured output intensity for separated point sources. The lateral resolution of the system was calculated by finding the lateral separation of the point sources at which the correlation falls below a threshold value of 35%, chosen in accordance with the Abbe diffraction limit of an equivalent lens-based system. A direct comparison between the SRPE lensless imaging system and an equivalent lens-based imaging system with similar system parameters shows that, despite being lensless, the SRPE system does not suffer in terms of lateral resolution compared with lens-based imaging systems. We have also investigated how this resolution is affected as the parameters of the lensless imaging system are varied. The results show that the SRPE lensless imaging system is robust to the object-to-diffuser and diffuser-to-sensor distances, the pixel size of the image sensor, and the number of pixels of the image sensor. To the best of our knowledge, this is the first work to investigate a lensless imaging system's lateral resolution, its robustness to multiple physical parameters of the system, and its comparison to lens-based imaging systems.
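A minimal sketch of this correlation-based resolution test is shown below: two point sources are propagated through a random phase screen with the angular spectrum method, and the output intensity is correlated against the overlapping-source reference until the correlation drops below the 35% threshold. All distances, sampling parameters, and the simple object-diffuser-sensor model are assumptions made for illustration.

```python
# Illustrative sketch of the correlation-based lateral resolution test for a
# random-phase-encoded lensless system; parameters are assumptions only.
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
n, dx, lam = 256, 2e-6, 632.8e-9
diffuser = np.exp(1j * 2 * np.pi * rng.random((n, n)))  # random phase screen

def output_intensity(sep_pixels):
    obj = np.zeros((n, n), complex)
    obj[n // 2, n // 2 - sep_pixels // 2] = 1
    obj[n // 2, n // 2 + (sep_pixels + 1) // 2] = 1
    field = angular_spectrum(obj, lam, dx, 1e-3) * diffuser      # object -> diffuser
    return np.abs(angular_spectrum(field, lam, dx, 5e-3)) ** 2   # diffuser -> sensor

ref = output_intensity(0)  # overlapping point sources as the reference
for sep in range(1, 20):
    corr = np.corrcoef(ref.ravel(), output_intensity(sep).ravel())[0, 1]
    print(f"separation {sep * dx:.1e} m -> correlation {corr:.2f}")
    if corr < 0.35:  # threshold from the description above
        break
```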
We study the ubiquitous super-resolution problem, in which one aims at localizing positive point sources in an image, blurred by the point spread function of the imaging device. To recover the point sources, we propose to solve a convex feasibility program, which simply finds a non-negative Borel measure that agrees with the observations collected by the imaging device. In the absence of imaging noise, we show that solving this convex program uniquely retrieves the point sources, provided that the imaging device collects enough observations. This result holds true if the point spread function of the imaging device can be decomposed into horizontal and vertical components and if the translations of these components form a Chebyshev system, i.e., a system of continuous functions that loosely behave like algebraic polynomials. Building upon the recent results for one-dimensional signals, we prove that this super-resolution algorithm is stable, in the generalized Wasserstein metric, to model mismatch (i.e., when the image is not sparse) and to additive imaging noise. In particular, the recovery error depends on the noise level and how well the image can be approximated with well-separated point sources. As an example, we verify these claims for the important case of a Gaussian point spread function. The proofs rely on the construction of novel interpolating polynomials, which are the main technical contribution of this paper, and partially resolve the question raised in Schiebinger et al. (2017, Inf. Inference, 7, 1-30) about the extension of the standard machinery to higher dimensions.
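On a discrete grid, the convex feasibility program can be prototyped in a few lines; the sketch below uses cvxpy, a Gaussian PSF, and synthetic noiseless data as illustrative stand-ins for the paper's continuous Borel-measure formulation.

```python
# Minimal sketch of the convex feasibility idea on a discretized 1D grid:
# find a non-negative source vector whose blurred image matches the data.
import numpy as np
import cvxpy as cp

grid = np.linspace(0, 1, 200)      # candidate source locations
samples = np.linspace(0, 1, 60)    # sensor sample positions
sigma = 0.03                       # Gaussian PSF width (assumed)

# Forward operator: each column is the PSF centered at one grid location.
A = np.exp(-(samples[:, None] - grid[None, :]) ** 2 / (2 * sigma ** 2))

# Ground-truth point sources used to generate synthetic observations.
x_true = np.zeros(grid.size)
x_true[[60, 90, 150]] = [1.0, 0.7, 1.3]
y = A @ x_true                     # noiseless observations

# Feasibility program: any non-negative x consistent with the data.
x = cp.Variable(grid.size, nonneg=True)
problem = cp.Problem(cp.Minimize(0), [cp.norm(A @ x - y, 2) <= 1e-6])
problem.solve()

# Note: on a finite grid the feasible point need not be unique; uniqueness is
# what the paper establishes in the continuous (Borel-measure) setting.
print("largest recovered components at:", grid[np.argsort(x.value)[-3:]])
```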
Implantable image sensors have the potential to revolutionize neuroscience. Due to their small form factor requirements, however, conventional filters and optics cannot be implemented. These limitations obstruct high-resolution imaging of large neural densities. Recent advances in angle-sensitive image sensors and single-photon avalanche diodes have provided a path toward ultrathin lens-less fluorescence imaging, enabling plenoptic sensing by extending sensing capabilities to include photon arrival time and incident angle, thereby providing the opportunity for separability of fluorescence point sources within the context of light-field microscopy (LFM). However, the addition of spectral sensitivity to angle-sensitive LFM reduces imager resolution because each wavelength requires a separate pixel subset. Here, we present a 1024-pixel, 50 µm thick implantable shank-based neural imager with color-filter-grating-based angle-sensitive pixels. This angular-spectral sensitive front end combines a metal-insulator-metal (MIM) Fabry-Perot color filter and diffractive optics to produce the measurement of orthogonal light-field information from two distinct colors within a single photodetector. The result is the ability to add independent color sensing to LFM while doubling the effective pixel density. The implantable imager combines angular-spectral and temporal information to demix and localize multispectral fluorescent targets. In this initial prototype, this is demonstrated with 45 μm diameter fluorescently labeled beads in scattering medium. Fluorescent lifetime imaging is exploited to further aid source separation, in addition to detecting pH through lifetime changes in fluorescent dyes. While these initial fluorescent targets are considerably brighter than fluorescently labeled neurons, further improvements will allow the application of these techniques to in-vivo multifluorescent structural and functional neural imaging.
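To make the demixing step concrete, the toy sketch below solves a per-pixel non-negative least-squares problem with an assumed 2x2 spectral response matrix; the matrix entries and measurements are purely illustrative and do not come from the reported device.

```python
# Toy sketch of spectral demixing from two pixel types with different
# (assumed) responses to two fluorophores; values are illustrative only.
import numpy as np
from scipy.optimize import nnls

# Rows: pixel types; columns: fluorophores (e.g. from front-end calibration).
R = np.array([[0.9, 0.2],
              [0.3, 0.8]])

measured = np.array([1.4, 1.1])           # counts from the two pixel types
abundances, residual = nnls(R, measured)  # non-negative fluorophore estimates
print("estimated fluorophore contributions:", abundances)
```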
Passive, compact, single-shot 3D sensing is useful in many application areas such as microscopy, medical imaging, surgical navigation, and autonomous driving, where form factor, time, and power constraints can exist. Obtaining RGB-D scene information over a short imaging distance, in an ultra-compact form factor, and in a passive, snapshot manner is challenging. Dual-pixel (DP) sensors are a potential solution to achieve the same. DP sensors collect light rays from two different halves of the lens in two interleaved pixel arrays, thus capturing two slightly different views of the scene, like a stereo camera system. However, imaging with a DP sensor implies that the defocus blur size is directly proportional to the disparity seen between the views. This creates a trade-off between disparity estimation and deblurring accuracy. To improve this trade-off, we propose CADS (Coded Aperture Dual-Pixel Sensing), in which we use a coded aperture in the imaging lens along with a DP sensor. In our approach, we jointly learn an optimal coded pattern and the reconstruction algorithm in an end-to-end optimization setting. Our resulting CADS imaging system demonstrates an improvement of >1.5 dB PSNR in all-in-focus (AIF) estimates and 5-6% in depth estimation quality over naive DP sensing for a wide range of aperture settings. Furthermore, we build the proposed CADS prototypes for DSLR photography settings and in an endoscope and a dermoscope form factor. Our novel coded dual-pixel sensing approach demonstrates accurate RGB-D reconstruction results in simulations and real-world experiments in a passive, snapshot, and compact manner.
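The end-to-end optimization idea can be sketched with a learnable aperture code and a small reconstruction network trained jointly; the toy forward model below (a single depth-independent blur) is a heavy simplification of dual-pixel image formation, and all shapes, losses, and hyperparameters are illustrative assumptions.

```python
# Toy sketch of jointly learning a coded aperture and a reconstruction network.
# The forward model and training data are stand-ins, not the CADS pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CodedApertureModel(nn.Module):
    def __init__(self, code_size=11):
        super().__init__()
        # Learnable aperture code, kept in [0, 1] via a sigmoid.
        self.code_logits = nn.Parameter(torch.zeros(1, 1, code_size, code_size))
        self.decoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, sharp):
        code = torch.sigmoid(self.code_logits)
        code = code / code.sum()
        # Convolution with the code stands in for coded-aperture defocus blur.
        blurred = F.conv2d(sharp, code, padding=code.shape[-1] // 2)
        return self.decoder(blurred)

model = CodedApertureModel()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    sharp = torch.rand(8, 1, 64, 64)      # stand-in training images
    recon = model(sharp)
    loss = F.mse_loss(recon, sharp)        # all-in-focus reconstruction loss
    optim.zero_grad(); loss.backward(); optim.step()
```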
