

Title: Dithered depth imaging

Single-photon lidar (SPL) is a promising technology for depth measurement at long range or from weak reflectors because of its sensitivity to extremely low light levels. However, constraints on the timing resolution of existing arrays of single-photon avalanche diode (SPAD) detectors limit the precision of the resulting depth estimates. In this work, we describe an implementation of subtractively dithered SPL that can recover high-resolution depth estimates despite the coarse resolution of the detector. Subtractively dithered measurement is achieved by adding programmable delays into the photon timing circuitry that introduce relative time shifts between the illumination and detection that are shorter than the time bin duration. Careful modeling of the temporal instrument response function leads to an estimator that outperforms the sample mean and yields depth estimates with up to 13 times lower root-mean-squared error than if dither were not used. The simple implementation and estimation suggest that globally dithered SPAD arrays could be used for high spatial- and temporal-resolution depth sensing.
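The core idea of subtractive dithering can be illustrated with a short simulation: a known random delay, drawn uniformly over one time bin, is added before the coarse quantization and subtracted afterward, turning the fixed quantization offset into zero-mean noise that averages away. The sketch below uses the plain sample mean for simplicity (the paper's estimator, which models the instrument response, performs better still); all numbers are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
bin_width = 1.0          # coarse TDC time-bin duration (arbitrary units)
t_true = 3.37            # true photon arrival time, with a sub-bin offset
n = 5000                 # number of detected photons

def quantize(t):
    # mid-rise quantizer with step equal to the bin width
    return bin_width * (np.floor(t / bin_width) + 0.5)

# Without dither: every detection falls in the same coarse bin, so the
# sample mean is stuck at the bin center, biased by the sub-bin offset.
plain = quantize(np.full(n, t_true))

# Subtractive dither: a known programmable delay d ~ U(0, bin_width) is
# added before quantization and subtracted afterward, which makes the
# quantization error uniform, zero-mean, and independent of t_true.
d = rng.uniform(0.0, bin_width, n)
dithered = quantize(t_true + d) - d

rmse_plain = abs(plain.mean() - t_true)
rmse_dither = abs(dithered.mean() - t_true)
print(rmse_plain, rmse_dither)   # dithered error shrinks as ~1/sqrt(n)
```

The dithered sample mean converges to the true sub-bin arrival time, while the undithered mean can never do better than the bin center.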

 
Award ID(s): 1815896, 1955219, 1422034
NSF-PAR ID: 10200653
Author(s) / Creator(s): ; ;
Publisher / Repository: Optical Society of America
Date Published:
Journal Name: Optics Express
Volume: 28
Issue: 23
ISSN: 1094-4087; OPEXFF
Page Range / eLocation ID: Article No. 35143
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Single-photon avalanche diodes (SPADs) are a rapidly developing image sensing technology with extreme low-light sensitivity and picosecond timing resolution. These unique capabilities have enabled SPADs to be used in applications like LiDAR, non-line-of-sight imaging, and fluorescence microscopy that require imaging in photon-starved scenarios. In this work, we harness these capabilities to deal with motion blur in a passive imaging setting under low illumination. Our key insight is that the data captured by a SPAD array camera can be represented as a 3D spatio-temporal tensor of photon detection events, which can be integrated along arbitrary spatio-temporal trajectories with dynamically varying integration windows, depending on scene motion. We propose an algorithm that estimates pixel motion from photon timestamp data and dynamically adapts the integration windows to minimize motion blur. Our simulation results show the applicability of this algorithm to a variety of motion profiles, including translation, rotation, and local object motion. We also demonstrate the real-world feasibility of our method on data captured using a 32×32 SPAD camera.
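The photon-cube idea in the abstract above can be sketched with a toy example: detections form a binary (time, row, column) tensor, and summing frames after shifting each one back along a motion trajectory concentrates counts that a naive temporal sum would smear. A minimal sketch, assuming a known one-pixel-every-8-frames trajectory (the paper estimates the motion from the photon timestamps themselves):

```python
import numpy as np

rng = np.random.default_rng(1)
T, H, W = 64, 32, 32       # time bins x rows x cols (32x32 array assumed)

# Synthetic scene: a bright square translating one pixel every 8 frames.
photon_cube = np.zeros((T, H, W), dtype=np.uint8)
for t in range(T):
    dx = t // 8                                  # horizontal object motion
    frame = rng.random((H, W)) < 0.01            # dark background counts
    frame[8:16, 4 + dx:12 + dx] |= rng.random((8, 8)) < 0.2  # object flux
    photon_cube[t] = frame

# Naive integration: summing all frames along time blurs the moving object.
blurred = photon_cube.sum(axis=0)

# Motion-compensated integration: shift each frame back along the
# (known or estimated) trajectory before summing.
aligned = np.zeros((H, W), dtype=np.int64)
for t in range(T):
    dx = t // 8
    aligned += np.roll(photon_cube[t].astype(np.int64), -dx, axis=1)

# The aligned sum concentrates counts in the object's true footprint.
print(blurred[8:16, 4:12].sum(), aligned[8:16, 4:12].sum())
```

The same shift-and-sum can be applied over dynamically sized integration windows, trading off noise against residual blur as the scene motion demands.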
  2. Single-photon 3D cameras can record the time of arrival of billions of photons per second with picosecond accuracy. One common approach to summarize the photon data stream is to build a per-pixel timestamp histogram, resulting in a 3D histogram tensor that encodes distances along the time axis. As the spatio-temporal resolution of the histogram tensor increases, the in-pixel memory requirements and output data rates can quickly become impractical. To overcome this limitation, we propose a family of linear compressive representations of histogram tensors that can be computed efficiently, in an online fashion, as a matrix operation. We design practical lightweight compressive representations that are amenable to an in-pixel implementation and consider the spatio-temporal information of each timestamp. Furthermore, we implement our proposed framework as the first layer of a neural network, which enables the joint end-to-end optimization of the compressive representations and a downstream SPAD data processing model. We find that a well-designed compressive representation can reduce in-sensor memory and data rates by up to 2 orders of magnitude without significantly reducing 3D imaging quality. Finally, we analyze the power consumption implications through an on-chip implementation.
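The compressive-histogram idea admits a compact sketch: with a B × K coding matrix C, the compressed representation C·h of a K-bin histogram h can be accumulated online, one column of C per photon timestamp, without ever storing h. The Fourier codes and phase-based decoding below are one illustrative choice, not the learned codes from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 1024        # time bins per pixel (full histogram length)
B = 16          # compressed coefficients kept in-pixel

# Coding matrix: the first B/2 Fourier harmonics (cos and sin rows).
k = np.arange(K)
freqs = np.arange(1, B // 2 + 1)
C = np.vstack([np.cos(2 * np.pi * np.outer(freqs, k) / K),
               np.sin(2 * np.pi * np.outer(freqs, k) / K)])    # (B, K)

# Simulated per-pixel timestamps: a Gaussian return around bin 300
# plus uniform background counts.
timestamps = np.concatenate([
    np.clip(rng.normal(300, 4, 400).astype(int), 0, K - 1),
    rng.integers(0, K, 100),
])

# Offline: build the full histogram, then compress with one matmul.
hist = np.bincount(timestamps, minlength=K)
c_offline = C @ hist

# Online: never store the histogram; accumulate one column per photon.
c_online = np.zeros(B)
for ts in timestamps:
    c_online += C[:, ts]

print(np.allclose(c_offline, c_online))   # identical up to float rounding

# Decode depth from the first harmonic's phase (valid for Fourier codes).
phase = np.arctan2(c_online[B // 2], c_online[0]) % (2 * np.pi)
peak_est = phase / (2 * np.pi) * K
print(peak_est)   # close to the true return bin, 300
```

Because the online update touches only B accumulators per photon, the memory footprint is B rather than K words per pixel, which is where the claimed order-of-magnitude savings come from.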
  3. Non-Line-Of-Sight (NLOS) imaging aims to recover the 3D geometry of objects that are hidden from the direct line of sight. One major challenge with this technique is the weak multibounce signal available, which limits scene size, capture speed, and reconstruction quality. To overcome this obstacle, we introduce a multipixel time-of-flight NLOS imaging method that combines specifically designed single-photon avalanche diode (SPAD) array detectors with a fast reconstruction algorithm, capturing and reconstructing live low-latency videos of non-line-of-sight scenes with natural, non-retroreflective objects. We develop a model of the signal-to-noise ratio of NLOS imaging and use it to devise a method that reconstructs the scene such that signal-to-noise ratio, motion blur, angular resolution, and depth resolution are all independent of scene depth, suggesting that reconstruction of very large scenes may be possible.

     
  4. Single-photon light detection and ranging (LiDAR) techniques use emerging single-photon avalanche diode (SPAD) detectors to push 3D imaging capabilities to unprecedented ranges. However, it remains challenging to robustly estimate scene depth from the noisy and otherwise corrupted measurements recorded by a SPAD. Here, we propose a deep sensor fusion strategy that combines corrupted SPAD data and a conventional 2D image to estimate the depth of a scene. Our primary contribution is a neural network architecture—SPADnet—that uses a monocular depth estimation algorithm together with a SPAD denoising and sensor fusion strategy. This architecture, together with several techniques in network training, achieves state-of-the-art results for RGB-SPAD fusion with simulated and captured data. Moreover, SPADnet is more computationally efficient than previous RGB-SPAD fusion networks.

     
  5. Techniques to control the spectro-temporal properties of quantum states of light at ultrafast time scales are crucial for numerous applications in quantum information science. In this work, we report an all-optical time lens for quantum signals based on Bragg-scattering four-wave mixing with picosecond resolution. Our system achieves a temporal magnification factor of 158 with single-photon level inputs, which is sufficient to overcome the intrinsic timing jitter of superconducting nanowire single-photon detectors. We demonstrate discrimination of two terahertz-bandwidth, single-photon-level pulses with 2.1 ps resolution (electronic jitter corrected resolution of 1.25 ps). We draw on elegant tools from Fourier optics to further show that the time-lens framework can be extended to perform complex unitary spectro-temporal transformations by imparting optimized temporal and spectral phase profiles to the input waveforms. Using numerical optimization techniques, we show that a four-stage transformation can realize an efficient temporal mode sorter that demultiplexes 10 Hermite–Gaussian (HG) modes. Our time-lens-based framework represents a new toolkit for arbitrary spectro-temporal processing of single photons, with applications in temporal mode quantum processing, high-dimensional quantum key distribution, temporal mode matching for quantum networks, and quantum-enhanced sensing with time-frequency entangled states.

     
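The magnification numbers in the last abstract follow from the temporal-imaging analogy: a time lens with magnification M stretches an input separation Δt to M·Δt at the output, so detector jitter referred back to the input shrinks by a factor of 1/M. A small arithmetic sketch (the jitter value below is an assumed, illustrative figure; only M = 158 and the 2.1 ps separation come from the text):

```python
# A temporal imaging system magnifies input time separations by M, so a
# detector whose jitter would normally swamp picosecond features can
# resolve them after magnification.
M = 158                     # temporal magnification factor (from the paper)
detector_jitter_ps = 50.0   # assumed SNSPD system jitter, illustrative only

# Two input pulses separated by dt_in appear at the output separated by
# M * dt_in; equivalently, the detector jitter referred back to the
# input is reduced by 1/M.
dt_in_ps = 2.1              # input separation demonstrated in the paper
dt_out_ps = M * dt_in_ps
effective_input_jitter_ps = detector_jitter_ps / M

print(dt_out_ps, effective_input_jitter_ps)
```

Under this assumed jitter, the magnified separation (hundreds of picoseconds) comfortably exceeds the detector jitter, which is what allows terahertz-bandwidth pulses to be discriminated.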