
Title: Low-latency time-of-flight non-line-of-sight imaging at 5 frames per second
Abstract

Non-line-of-sight (NLOS) imaging aims to recover the 3D geometry of objects that are hidden from the direct line of sight. A major challenge with this technique is the weak available multibounce signal, which limits scene size, capture speed, and reconstruction quality. To overcome this obstacle, we introduce a multipixel time-of-flight non-line-of-sight imaging method that combines specifically designed Single Photon Avalanche Diode (SPAD) array detectors with a fast reconstruction algorithm to capture and reconstruct live low-latency videos of non-line-of-sight scenes containing natural, non-retroreflective objects. We develop a model of the signal-to-noise ratio of non-line-of-sight imaging and use it to devise a method that reconstructs the scene such that signal-to-noise ratio, motion blur, angular resolution, and depth resolution are all independent of scene depth, suggesting that reconstruction of very large scenes may be possible.
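To make the role of the signal-to-noise-ratio model concrete, here is a back-of-the-envelope sketch of the shot-noise-limited SNR of a three-bounce NLOS return, written in Python. It is an illustration only, not the model developed in the paper: the 1/d^4 radiometric falloff, the Poisson counting statistics, and every numeric constant (emitted photons, albedo, patch area, detector efficiency, ambient background) are assumptions chosen for the example.

    # Illustrative only (not the paper's model): shot-noise-limited SNR of a
    # three-bounce NLOS return versus hidden-scene depth, assuming the usual
    # ~1/d^4 diffuse falloff between the relay wall and a hidden patch.
    # All constants are hypothetical placeholder values.
    import numpy as np

    def detected_signal_photons(depth_m, photons_emitted=1e9, albedo=0.5,
                                patch_area_m2=1e-2, detector_eff=0.2):
        """Expected three-bounce signal photons from a patch at the given depth
        (~1/d^2 spreading out to the patch and ~1/d^2 back to the wall)."""
        falloff = patch_area_m2 / (4 * np.pi * depth_m**2) ** 2
        return photons_emitted * albedo**2 * detector_eff * falloff

    def snr(signal, background):
        """Poisson (shot-noise-limited) SNR with an ambient background term."""
        return signal / np.sqrt(signal + background)

    depths = np.array([0.5, 1.0, 2.0, 4.0])      # hidden-object depths in metres
    background = 50.0                            # assumed ambient counts per exposure

    fixed_voxel = detected_signal_photons(depths)
    # Letting the reconstructed patch grow with depth (constant angular size,
    # area ~ d^2) recovers two of the four powers of distance in this toy model.
    scaled_voxel = detected_signal_photons(depths, patch_area_m2=1e-2 * depths**2)

    for d, s_fix, s_scl in zip(depths, fixed_voxel, scaled_voxel):
        print(f"depth {d:3.1f} m | fixed-voxel SNR {snr(s_fix, background):8.2f}"
              f" | depth-scaled-voxel SNR {snr(s_scl, background):8.2f}")

Even this crude calculation shows how quickly the usable signal collapses with depth and why a reconstruction designed so that SNR, motion blur, angular resolution, and depth resolution do not degrade with depth matters; the paper's actual depth-independence argument rests on its full capture and reconstruction design and is not reproduced here.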

Authors:
Publication Date:
NSF-PAR ID:
10306359
Journal Name:
Nature Communications
Volume:
12
Issue:
1
ISSN:
2041-1723
Publisher:
Nature Publishing Group
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Non-line-of-sight (NLOS) imaging is a rapidly growing field seeking to form images of objects outside the field of view, with potential applications in autonomous navigation, reconnaissance, and even medical imaging. The critical challenge of NLOS imaging is that diffuse reflections scatter light in all directions, resulting in weak signals and a loss of directional information. To address this problem, we propose a method for seeing around corners that derives angular resolution from vertical edges and longitudinal resolution from the temporal response to a pulsed light source. We introduce an acquisition strategy, scene response model, and reconstruction algorithm that enable the formation of 2.5-dimensional representations (a plan view plus heights) and a 180° field of view for large-scale scenes. Our experiments demonstrate accurate reconstructions of hidden rooms up to 3 meters in each dimension despite a small scan aperture (1.5-centimeter radius) and only 45 measurement locations.
  2. Edge-resolved transient imaging (ERTI) is a method for non-line-of-sight imaging that combines direct time-of-flight distance measurement with the azimuthal angular resolution afforded by a vertical edge occluder. Recently conceived and demonstrated for the first time, ERTI has not yet been the subject of published performance analyses or optimizations. This paper explains how the difficulty of detecting hidden scene objects with ERTI depends on a variety of parameters, including illumination power, acquisition time, ambient light, visible-side reflectivity, hidden-side reflectivity, target range, and target azimuthal angular position. Based on this analysis, an optimization of the acquisition process is introduced whereby the illumination dwell times are varied to counteract the decreasing signal-to-noise ratio at deeper angles into the hidden volume (a simple dwell-time allocation sketch follows this list). Inaccuracy caused by a coaxial approximation is also analyzed and simulated.
  3. Abstract

    Optical coherence tomography (OCT) is a widely used non-invasive biomedical imaging modality that can rapidly provide volumetric images of samples. Here, we present a deep learning-based image reconstruction framework that can generate swept-source OCT (SS-OCT) images using undersampled spectral data, without any spatial aliasing artifacts. This neural network-based image reconstruction does not require any hardware changes to the optical setup and can be easily integrated with existing swept-source or spectral-domain OCT systems to reduce the amount of raw spectral data to be acquired. To show the efficacy of this framework, we trained and blindly tested a deep neural network using mouse embryo samples imaged by an SS-OCT system. Using 2× undersampled spectral data (i.e., 640 spectral points per A-line), the trained neural network can blindly reconstruct 512 A-lines in 0.59 ms using multiple graphics processing units (GPUs), removing spatial aliasing artifacts due to spectral undersampling and matching the images of the same samples reconstructed from the full spectral OCT data (i.e., 1280 spectral points per A-line) very well. We also demonstrate that this framework can be extended to process 3× undersampled spectral data per A-line, with some degradation in reconstructed image quality compared to 2× spectral undersampling. Furthermore, an A-line-optimized undersampling method is presented by jointly optimizing the spectral sampling locations and the corresponding image reconstruction network, which improves the overall imaging performance while using fewer spectral data points per A-line than the 2× or 3× spectral undersampling results. This deep learning-enabled image reconstruction approach can be broadly used in various forms of spectral-domain OCT systems, helping to increase their imaging speed without sacrificing image resolution and signal-to-noise ratio.

  4. Non-line-of-sight (NLOS) imaging is a rapidly advancing technology that provides asymmetric vision: seeing without being seen. Though limited in accuracy, resolution, and depth recovery compared to active methods, the capabilities of passive methods are especially surprising because they typically use only a single, inexpensive digital camera. One of the largest challenges in passive NLOS imaging is ambient background light, which limits the dynamic range of the measurement while carrying no useful information about the hidden part of the scene. In this work we propose a new reconstruction approach that uses an optimized linear transformation to balance the rejection of uninformative light with the retention of informative light, resulting in fast (video-rate) reconstructions of hidden scenes from photographs of a blank wall under high ambient light conditions.
  5. Abstract

    Cameras with extreme speeds are enabling technologies in both fundamental and applied sciences. However, existing ultrafast cameras are incapable of coping with extended three-dimensional scenes and fall short for non-line-of-sight imaging, which requires a long sequence of time-resolved two-dimensional data. Current non-line-of-sight imagers therefore need to perform extensive scanning in the spatial and/or temporal dimension, restricting their use to imaging only static or slowly moving objects. To address these long-standing challenges, we present here ultrafast light field tomography (LIFT), a transient imaging strategy that offers a temporal sequence of over 1000 and enables highly efficient light field acquisition, allowing snapshot acquisition of the complete four-dimensional space and time. With LIFT, we demonstrated three-dimensional imaging of light-in-flight phenomena with <10 picosecond resolution and non-line-of-sight imaging at a 30 Hz video rate. Furthermore, we showed how LIFT can benefit from deep learning for improved and accelerated image formation. LIFT may facilitate broad adoption of time-resolved methods in various disciplines.

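The dwell-time optimization described in the second entry above can be illustrated with a simple allocation rule: if the expected signal rate in each azimuthal bin is known (or estimated) beforehand, spending acquisition time inversely proportional to that rate equalizes the expected photon count, and hence the shot-noise-limited SNR, across bins. The Python sketch below is a generic illustration under that assumption; the cosine-squared rate model, the time budget, and all constants are placeholders rather than values from the ERTI paper.

    # Toy illustration of SNR-equalizing dwell-time allocation for an
    # edge-resolved (ERTI-style) azimuthal scan. The rate model and all
    # constants are assumptions made for this example.
    import numpy as np

    def expected_rate(theta_rad, base_rate=2000.0):
        """Assumed signal rate (photons/s) per azimuthal bin; deeper angles
        into the hidden volume return fewer photons in this toy model."""
        return base_rate * np.cos(theta_rad) ** 2 + 1e-6

    angles = np.deg2rad(np.linspace(5, 85, 9))   # azimuthal bins past the edge
    total_time_s = 60.0                          # assumed acquisition budget
    rates = expected_rate(angles)

    # Equal-time baseline: every bin gets the same dwell time.
    equal_dwell = np.full_like(rates, total_time_s / rates.size)

    # SNR-equalizing allocation: dwell inversely proportional to the expected
    # rate, so the expected photon count (rate * dwell) is the same everywhere.
    weights = 1.0 / rates
    opt_dwell = total_time_s * weights / weights.sum()

    for a, r, t_eq, t_opt in zip(np.rad2deg(angles), rates, equal_dwell, opt_dwell):
        print(f"{a:5.1f} deg | equal dwell -> {r * t_eq:7.0f} photons"
              f" | optimized dwell {t_opt:5.2f} s -> {r * t_opt:7.0f} photons")

The real ERTI analysis also accounts for differential measurements, ambient light, and reflectivities; this sketch only captures the core idea that longer dwells at weaker angles flatten the SNR across the hidden volume.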