A conventional optical lens can enhance lateral resolution in optical coherence tomography (OCT) by focusing the input light onto the sample. However, the typical Gaussian beam profile of such a lens imposes a tradeoff between the depth of focus (DOF) and the lateral resolution, so the lateral resolution is often compromised to achieve a mm-scale DOF. We have experimentally shown that a cascade system of an ultrasonic virtual tunable optical waveguide (UVTOW) and a short focal-length lens can provide a large DOF without severely compromising the lateral resolution compared to an external lens with the same effective focal length. In addition, leveraging the reconfigurability of UVTOW, we show that the focal length of the cascade system can be tuned without the need for mechanical translation of the optical lens. We compare the performance of the cascade system with that of a conventional optical lens to demonstrate enhanced DOF without compromising the lateral resolution, as well as the reconfigurability of UVTOW, for OCT imaging.
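For context, the severity of this tradeoff follows from standard Gaussian-beam optics: the DOF is commonly taken as twice the Rayleigh range, 2z_R = 2πw0²/λ, so it grows with the square of the beam waist w0 while the lateral resolution degrades only linearly with w0. The sketch below is a generic illustration of that scaling (it is not the UVTOW model), assuming a 1.3 µm center wavelength and purely illustrative waist sizes.

```python
import math

def gaussian_beam_dof_um(w0_um: float, wavelength_um: float = 1.3) -> float:
    """Depth of focus (um), taken as twice the Rayleigh range, for a Gaussian
    beam with 1/e^2 waist radius w0_um at the given wavelength."""
    rayleigh_range_um = math.pi * w0_um ** 2 / wavelength_um
    return 2.0 * rayleigh_range_um

# Illustrative only: a tight ~3.5 um waist (fine lateral resolution) yields a
# DOF of tens of microns, whereas a mm-scale DOF requires a much larger waist.
for w0_um in (3.5, 20.0):
    print(f"w0 = {w0_um:5.1f} um  ->  DOF ~ {gaussian_beam_dof_um(w0_um) / 1000:.2f} mm")
```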
EDoF-ToF: extended depth of field time-of-flight imaging
Conventional continuous-wave amplitude-modulated time-of-flight (CWAM ToF) cameras suffer from a fundamental trade-off between light throughput and depth of field (DoF): a larger lens aperture collects more light but yields a significantly shallower DoF. However, both high light throughput, which increases the signal-to-noise ratio, and a wide DoF, which enlarges the system’s applicable depth range, are valuable for CWAM ToF applications. In this work, we propose EDoF-ToF, an algorithmic method to extend the DoF of large-aperture CWAM ToF cameras by using a neural network to deblur objects outside of the lens’s narrow focal region and thus produce an all-in-focus measurement. A key component of our work is the proposed large-aperture ToF training-data simulator, which models the depth-dependent blurs and partial occlusions caused by such apertures. Contrary to conventional image deblurring, where the blur model is typically linear, ToF depth maps are nonlinear functions of scene intensities, resulting in a nonlinear blur model that we also derive for our simulator. Unlike extended DoF for conventional photography, where depth information needs to be encoded (or made depth-invariant) using additional hardware (phase masks, focal sweeping, etc.), ToF sensor measurements naturally encode depth information, allowing a completely software-based solution to extended DoF. We experimentally demonstrate EDoF-ToF increasing the DoF of a conventional ToF system by 3.6×, effectively achieving the DoF of a smaller lens aperture that allows 22.1× less light. Ultimately, EDoF-ToF enables CWAM ToF cameras to enjoy the benefits of both high light throughput and a wide DoF.
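To see where the nonlinearity comes from, consider the standard four-phase CWAM ToF estimator: depth is recovered from an arctangent of intensity differences, so a blur that mixes raw intensities does not mix depths linearly. The snippet below is a generic sketch of that estimator (one common sampling convention, not the authors' simulator), with I0..I3 denoting correlation samples at 0°, 90°, 180°, and 270° phase offsets and f_mod an illustrative modulation frequency.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def cwam_tof_depth(i0, i1, i2, i3, f_mod=20e6):
    """Depth from four correlation samples of a CW amplitude-modulated signal.
    The arctangent makes depth a nonlinear function of the raw intensities,
    so defocus blur applied to intensities does not act linearly on depth."""
    phase = np.arctan2(i3 - i1, i0 - i2)     # wrapped phase
    phase = np.mod(phase, 2 * np.pi)         # map to [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)   # depth within the ambiguity range

# Toy example: averaging (blurring) two pixels' intensities does not average
# their depths, which is the nonlinear blur effect described above.
a = cwam_tof_depth(1.0, 0.2, 0.1, 0.9)
b = cwam_tof_depth(0.3, 0.8, 1.0, 0.2)
mixed = cwam_tof_depth(*(0.5 * (np.array([1.0, 0.2, 0.1, 0.9]) +
                                np.array([0.3, 0.8, 1.0, 0.2]))))
print(a, b, 0.5 * (a + b), mixed)  # mixed depth != mean of the two depths
```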
- PAR ID: 10307916
- Publisher / Repository: Optical Society of America
- Journal Name: Optics Express
- Volume: 29
- Issue: 23
- ISSN: 1094-4087; OPEXFF
- Size: Article No. 38540
- Sponsoring Org: National Science Foundation
More Like this
-
Light-sheet microscopes must compromise among field of view, optical sectioning, resolution, and detection efficiency. High-numerical-aperture (NA) detection objective lenses provide higher resolution, but their narrow depth of field inefficiently captures the fluorescence signal generated throughout the thickness of the illumination light sheet when imaging large volumes. Here, we present ExD-SPIM (extended depth-of-field selective-plane illumination microscopy), an improved light-sheet microscopy strategy that solves this limitation by extending the depth of field (DOF) of high-NA detection objectives to match the thickness of the illumination light sheet. This extension of the DOF uses a phase mask to axially stretch the point-spread function of the objective lens while largely preserving lateral resolution. This matching of the detection DOF to the illumination-sheet thickness increases the total fluorescence collection, reduces the background, and improves the overall signal-to-noise ratio (SNR), as shown by numerical simulations, imaging of bead phantoms, and imaging of living animals. In comparison to conventional light-sheet imaging with low-NA detection that yields equivalent DOF, the results show that ExD-SPIM increases the SNR by more than threefold and dramatically reduces the rate of photobleaching. Compared to conventional high-NA detection, ExD-SPIM improves the signal sensitivity and volumetric coverage of whole-brain activity imaging, increasing the number of detected neurons by over a third.
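For orientation, the reason a high-NA detection objective cannot natively cover the light-sheet thickness is that its wave-optics depth of field shrinks roughly with the square of the NA. The following is a minimal sketch of that scaling, assuming the common approximation DOF ≈ λn/NA² and purely illustrative numbers (a ~4 µm sheet, water immersion, green emission); it is not the ExD-SPIM design calculation.

```python
def detection_dof_um(numerical_aperture: float,
                     wavelength_um: float = 0.52,
                     refractive_index: float = 1.33) -> float:
    """Wave-optics depth of field (um) of a detection objective, using the
    common approximation DOF ~ lambda * n / NA^2 (axial diffraction term only)."""
    return wavelength_um * refractive_index / numerical_aperture ** 2

# Illustrative comparison of a high-NA and a low-NA detection objective
# against a hypothetical ~4 um thick illumination light sheet.
sheet_thickness_um = 4.0
for na in (1.0, 0.4):
    dof = detection_dof_um(na)
    status = "covers" if dof >= sheet_thickness_um else "thinner than"
    print(f"NA {na:.1f}: DOF ~ {dof:.1f} um ({status} the sheet)")
```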
-
For many clinical applications, such as dermatology, optical coherence tomography (OCT) suffers from limited penetration depth due primarily to the highly scattering nature of biological tissues. Here, we present a novel implementation of dual-axis optical coherence tomography (DA-OCT) that offers improved depth penetration in skin imaging at 1.3 µm compared to conventional OCT. Several unique aspects of DA-OCT are examined here, including the requirements for scattering properties to realize the improvement and the limited depth of focus (DOF) inherent to the technique. To overcome this limitation, our approach uses a tunable lens to coordinate focal plane selection with image acquisition to create an enhanced DOF for DA-OCT. This improvement in penetration depth is quantified experimentally against conventional on-axis OCT using tissue phantoms and mouse skin. The results presented here suggest the potential use of DA-OCT in situations where a high degree of scattering limits depth penetration in OCT imaging.
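The focal-plane coordination described above can be pictured as a sweep-and-stitch loop: retune the lens, acquire, and keep only the depth rows near the current focus. The sketch below is purely illustrative; `set_focus_depth` and `acquire_bscan` are hypothetical stand-ins for whatever tunable-lens driver and OCT acquisition calls a given system exposes, and this is not the authors' implementation.

```python
import numpy as np

def enhanced_dof_bscan(set_focus_depth, acquire_bscan, plan):
    """Build an extended-DOF B-scan by sweeping a tunable lens through several
    focal planes and keeping only the in-focus depth rows from each acquisition.

    `plan` is a list of (focus_depth_mm, row_slice) pairs; `set_focus_depth`
    and `acquire_bscan` are placeholders for system-specific hardware calls."""
    strips = []
    for focus_depth_mm, row_slice in plan:
        set_focus_depth(focus_depth_mm)   # retune the lens focal power
        bscan = acquire_bscan()           # 2D array: (depth_pixels, lateral_pixels)
        strips.append(bscan[row_slice])   # keep the rows near this focal plane
    return np.vstack(strips)

# Usage sketch with dummy hardware stubs:
if __name__ == "__main__":
    fake_scan = lambda: np.random.rand(300, 512)
    plan = [(0.5, slice(0, 100)), (1.0, slice(100, 200)), (1.5, slice(200, 300))]
    print(enhanced_dof_bscan(lambda depth_mm: None, fake_scan, plan).shape)  # (300, 512)
```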
-
Passive, compact, single-shot 3D sensing is useful in many application areas such as microscopy, medical imaging, surgical navigation, and autonomous driving, where form factor, time, and power constraints can exist. Obtaining RGB-D scene information over a short imaging distance, in an ultra-compact form factor, and in a passive, snapshot manner is challenging. Dual-pixel (DP) sensors are a potential solution to achieve the same. DP sensors collect light rays from two different halves of the lens in two interleaved pixel arrays, thus capturing two slightly different views of the scene, like a stereo camera system. However, imaging with a DP sensor implies that the defocus blur size is directly proportional to the disparity seen between the views. This creates a trade-off between disparity estimation and deblurring accuracy. To improve this trade-off, we propose CADS (Coded Aperture Dual-Pixel Sensing), in which we use a coded aperture in the imaging lens along with a DP sensor. In our approach, we jointly learn an optimal coded pattern and the reconstruction algorithm in an end-to-end optimization setting. Our resulting CADS imaging system demonstrates improvement of >1.5 dB PSNR in all-in-focus (AIF) estimates and 5-6% in depth estimation quality over naive DP sensing for a wide range of aperture settings. Furthermore, we build the proposed CADS prototypes for DSLR photography settings and in endoscope and dermoscope form factors. Our novel coded dual-pixel sensing approach demonstrates accurate RGB-D reconstruction results in simulations and real-world experiments in a passive, snapshot, and compact manner.
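The disparity-versus-blur coupling can be illustrated with a toy 1D dual-pixel image-formation model in which the two sub-pixels see opposite halves of the (optionally coded) aperture, so their views shift apart as the defocus kernel grows. This is a simplified, generic sketch, not the CADS simulator; the box-shaped kernel and the `aperture_code` mask are assumptions made for illustration.

```python
import numpy as np

def dual_pixel_views(signal, blur_radius, aperture_code=None):
    """Toy 1D dual-pixel formation: the defocus kernel of width 2*blur_radius+1
    is split into left/right halves (one per DP sub-pixel), so the two views
    are blurred copies of the scene shifted apart by roughly the blur radius,
    i.e. disparity grows with defocus. `aperture_code` optionally masks the
    kernel, mimicking a coded aperture."""
    size = 2 * blur_radius + 1
    kernel = np.ones(size)
    if aperture_code is not None:
        kernel = kernel * np.asarray(aperture_code, dtype=float)
    left = kernel.copy();  left[size // 2 + 1:] = 0.0   # left half-aperture
    right = kernel.copy(); right[:size // 2] = 0.0      # right half-aperture
    left /= left.sum();  right /= right.sum()
    return (np.convolve(signal, left, mode="same"),
            np.convolve(signal, right, mode="same"))

# A step edge seen through a large defocus blur: the two DP views place the
# edge at different positions, and the offset tracks the blur radius.
edge = np.r_[np.zeros(20), np.ones(20)]
l, r = dual_pixel_views(edge, blur_radius=5)
print(np.argmax(l > 0.5), np.argmax(r > 0.5))  # edge positions differ between views
```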
-
3D sensing is a primitive function that allows imaging with depth information, generally achieved via the time-of-flight (ToF) principle. However, time-to-digital converters (TDCs) in conventional ToF sensors are usually bulky and complex, and exhibit large delay and power loss. To overcome these issues, a resistive time-of-flight (R-ToF) sensor that can measure the depth information in an analog domain by mimicking the biological process of spike-timing-dependent plasticity (STDP) is proposed herein. The R-ToF sensors, based on integrated avalanche photodiodes (APDs) with memristive intelligent matter, achieve a scan depth of up to 55 cm (≈89% accuracy and 2.93 cm standard deviation) and low power consumption (0.5 nJ/step) without TDCs. The in-depth computing is realized via R-ToF 3D imaging and memristive classification. This R-ToF system opens a new pathway for miniaturized and energy-efficient neuromorphic vision engineering that can be harnessed in light detection and ranging (LiDAR), automotive vehicles, biomedical in vivo imaging, and augmented/virtual reality.
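For a sense of scale (not taken from the paper's circuit analysis), the underlying ToF relationship is simply depth = c·Δt/2, so the reported ~55 cm scan depth corresponds to round-trip delays of only a few nanoseconds, which is the timing that a TDC-free analog scheme must resolve. A minimal sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_time_ns(depth_m: float) -> float:
    """Round-trip time (ns) for a ToF return from a target at depth_m."""
    return 2.0 * depth_m / C * 1e9

def depth_from_delay_m(delay_ns: float) -> float:
    """Depth (m) recovered from a measured round-trip delay, d = c * dt / 2."""
    return C * (delay_ns * 1e-9) / 2.0

print(f"{round_trip_time_ns(0.55):.2f} ns round trip at 0.55 m")  # ~3.67 ns
print(f"{depth_from_delay_m(3.67):.3f} m from a 3.67 ns delay")   # ~0.550 m
```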