Title: Neuron-Inspired Time-of-Flight Sensing via Spike-Timing-Dependent Plasticity of Artificial Synapses
3D sensing is a primitive imaging function that captures depth information, generally via the time-of-flight (ToF) principle. However, the time-to-digital converters (TDCs) in conventional ToF sensors are bulky and complex, and they introduce significant delay and power loss. To overcome these issues, a resistive time-of-flight (R-ToF) sensor that measures depth in the analog domain by mimicking the biological process of spike-timing-dependent plasticity (STDP) is proposed herein. The R-ToF sensors, which integrate avalanche photodiodes (APDs) with memristive devices, achieve a scan depth of up to 55 cm (≈89% accuracy, 2.93 cm standard deviation) at low power consumption (0.5 nJ/step) without TDCs. In-depth computing is realized via R-ToF 3D imaging and memristive classification. This R-ToF system opens a new pathway toward miniaturized, energy-efficient neuromorphic vision engineering that can be harnessed in light detection and ranging (LiDAR), autonomous vehicles, biomedical in vivo imaging, and augmented/virtual reality.
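As a rough illustration of the idea (a sketch, not the authors' circuit), the snippet below encodes the emit-to-return interval of a ToF measurement as an STDP-like conductance change and then inverts that curve to recover depth. The constants TAU and DG_MAX and the exponential update rule are assumptions for illustration only.

```python
import numpy as np

C = 3e8        # speed of light (m/s)
TAU = 2e-9     # assumed STDP decay constant (s)
DG_MAX = 1e-6  # assumed maximum conductance change per event (S)

def stdp_weight_change(dt):
    """Exponential STDP-like update: a shorter emit-to-return
    interval (a nearer object) yields a larger conductance change."""
    return DG_MAX * np.exp(-dt / TAU)

def depth_from_conductance(dg):
    """Invert the STDP curve to recover the round-trip time,
    then convert to depth via d = c * dt / 2."""
    dt = -TAU * np.log(dg / DG_MAX)
    return C * dt / 2.0

# Example: an object at 0.5 m gives a ~3.33 ns round trip.
true_depth = 0.5
dt = 2 * true_depth / C
dg = stdp_weight_change(dt)
print(f"conductance change: {dg:.3e} S")
print(f"recovered depth:    {depth_from_conductance(dg):.3f} m")
```

Because the conductance update is a monotonic function of the spike interval, depth can be read out directly from the device state with no time-to-digital conversion, which is the core of the R-ToF argument.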
Award ID(s):
1942868
PAR ID:
10306991
Author(s) / Creator(s):
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
Advanced Intelligent Systems
Volume:
4
Issue:
3
ISSN:
2640-4567
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Time-correlated single-photon counting (TCSPC) is an enabling technology for applications such as low-light fluorescence lifetime microscopy and photon-counting time-of-flight (ToF) 3D imaging. However, state-of-the-art TCSPC single-photon timing resolution (SPTR) is limited to 3–100 ps by single-photon detectors. Here, we experimentally demonstrate a time-magnified TCSPC (TM-TCSPC) that achieves an ultrashort SPTR of 550 fs with an off-the-shelf single-photon detector. The TM-TCSPC can resolve ultrashort pulses with a 130-fs pulse-width difference at 22-fs accuracy. When applied to photon-counting ToF 3D imaging, the TM-TCSPC suppresses, by 99.2%, the range-walk error that limits all photon-counting ToF 3D imaging systems, and thus provides high depth accuracy and precision of 26 µm and 3 µm, respectively.
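For context, conventional TCSPC estimates depth by histogramming many photon arrival times and locating the histogram peak; the time-magnification result above sharpens the timing resolution feeding exactly this pipeline. A minimal sketch of the baseline histogram step (the bin width, time window, and detector jitter are illustrative assumptions):

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def tcspc_depth(arrival_times, bin_width=1e-12, t_max=20e-9):
    """Histogram photon arrival times (the TCSPC step) and take
    the peak bin as the round-trip-time estimate."""
    n_bins = round(t_max / bin_width)
    hist, edges = np.histogram(arrival_times, bins=n_bins, range=(0.0, t_max))
    t_peak = edges[np.argmax(hist)] + bin_width / 2
    return C * t_peak / 2

# Example: photons returning from a target at 1.5 m (10 ns round
# trip), jittered by an assumed 50 ps detector response.
rng = np.random.default_rng(0)
times = 1e-8 + rng.normal(0.0, 50e-12, size=10_000)
print(f"estimated depth: {tcspc_depth(times):.4f} m")
```

The SPTR sets how narrow that histogram peak can be; magnifying time before detection lets an off-the-shelf detector resolve features its native jitter would otherwise wash out.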
  2. Non-line-of-sight (NLOS) detection and ranging aim to identify hidden objects by sensing indirect light reflections. Although numerous computational methods have been proposed for NLOS detection and imaging, the post-signal processing required by peripheral circuits remains complex. One possible solution for simplifying NLOS detection and ranging involves neuromorphic devices such as memristors, which have intrinsic resistive-switching capabilities and can store spatiotemporal information. In this study, we employed the memristive spike-timing-dependent plasticity learning rule to program time-of-flight (ToF) depth information directly into a memristor medium. By coupling the transmitted signal from the source with the photocurrent from the target object into a single memristor unit, we induced a programming pulse tuned by the time interval between the two superimposed signals. This neuromorphic ToF principle is employed here to detect and range NLOS objects without complex peripheral circuitry for processing raw signals. We experimentally demonstrated its effectiveness by integrating an HfO2 memristor and an avalanche photodiode to detect NLOS objects in multiple directions. This technology has potential applications in fields such as automotive navigation, machine learning, and biomedical engineering.
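A minimal sketch of the superposition idea, assuming idealized rectangular pulses and a hypothetical programming threshold V_TH: the transmitted pulse and the APD return pulse are summed at a single memristor terminal, and only their overlap, whose duration shrinks as the time of flight grows, exceeds the threshold and programs the device.

```python
import numpy as np

V_TH = 1.0  # assumed memristor programming threshold (V)

def programming_voltage(t, t_emit, t_return, width=5e-9, v_tx=0.7, v_rx=0.7):
    """Superimpose an idealized transmitted pulse and the photodiode
    return pulse; only where the two overlap does the sum exceed
    the programming threshold."""
    tx = v_tx * ((t >= t_emit) & (t < t_emit + width))
    rx = v_rx * ((t >= t_return) & (t < t_return + width))
    return tx + rx

# A 3 ns flight time leaves a 2 ns overlap of the two 5 ns pulses.
t = np.linspace(0.0, 20e-9, 2001)
v = programming_voltage(t, t_emit=0.0, t_return=3e-9)
overlap = (v > V_TH).sum() * (t[1] - t[0])
print(f"effective programming window: {overlap * 1e9:.2f} ns")
```

The programming window, and hence the conductance written into the device, is then a direct analog record of the flight time, so no TDC or raw-timestamp post-processing is needed.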
  3. Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF). Several works extend these to dynamic scenes captured with monocular video, with promising performance. However, the monocular setting is known to be an under-constrained problem, and so methods rely on data-driven priors for reconstructing dynamic content. We replace these priors with measurements from a time-of-flight (ToF) camera, and introduce a neural representation based on an image formation model for continuous-wave ToF cameras. Instead of working with processed depth maps, we model the raw ToF sensor measurements to improve reconstruction quality and avoid issues with low reflectance regions, multi-path interference, and a sensor's limited unambiguous depth range. We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions, and discuss the benefits and limitations of integrating RGB+ToF sensors now available on modern smartphones. 
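For reference, continuous-wave ToF cameras like those modeled above do not measure depth directly: they record correlation ("bucket") samples at several phase offsets and demodulate. A minimal four-bucket sketch (the 30 MHz modulation frequency and noiseless buckets are assumptions):

```python
import numpy as np

C = 3e8       # speed of light (m/s)
F_MOD = 30e6  # assumed modulation frequency (Hz)

def cw_tof_depth(b0, b90, b180, b270):
    """Four-bucket CW-ToF demodulation: recover the phase shift of
    the returned wave, then convert phase to depth."""
    phase = np.arctan2(b90 - b270, b0 - b180) % (2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)

# Simulated raw correlation samples for a target at 2 m.
true_depth = 2.0
phi = 4 * np.pi * F_MOD * true_depth / C
offsets = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
buckets = np.cos(phi - offsets)  # idealized, noiseless buckets
print(f"recovered depth: {cw_tof_depth(*buckets):.3f} m")
```

The arctangent wraps at 2π, which gives the limited unambiguous depth range (c / 2·f_mod, here 5 m) the abstract mentions; modeling the raw buckets rather than the derived depth map is what lets the method handle this wrap-around, low-reflectance regions, and multi-path effects.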
  4. Conventional continuous-wave amplitude-modulated time-of-flight (CWAM ToF) cameras suffer from a fundamental trade-off between light throughput and depth of field (DoF): a larger lens aperture allows more light collection but suffers from significantly lower DoF. However, both high light throughput, which increases signal-to-noise ratio, and a wide DoF, which enlarges the system's applicable depth range, are valuable for CWAM ToF applications. In this work, we propose EDoF-ToF, an algorithmic method to extend the DoF of large-aperture CWAM ToF cameras by using a neural network to deblur objects outside of the lens's narrow focal region and thus produce an all-in-focus measurement. A key component of our work is the proposed large-aperture ToF training data simulator, which models the depth-dependent blurs and partial occlusions caused by such apertures. Contrary to conventional image deblurring, where the blur model is typically linear, ToF depth maps are nonlinear functions of scene intensities, resulting in a nonlinear blur model that we also derive for our simulator. Unlike extended DoF for conventional photography, where depth information needs to be encoded (or made depth-invariant) using additional hardware (phase masks, focal sweeping, etc.), ToF sensor measurements naturally encode depth information, allowing a completely software solution to extended DoF. We experimentally demonstrate EDoF-ToF increasing the DoF of a conventional ToF system by 3.6×, effectively achieving the DoF of a smaller lens aperture that allows 22.1× less light. Ultimately, EDoF-ToF enables CWAM ToF cameras to enjoy the benefits of both high light throughput and a wide DoF.
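To see why the ToF blur model is nonlinear, note that defocus averages the raw bucket measurements, while depth is an arctangent of those measurements. A toy two-pixel example, reusing the hypothetical four-bucket model sketched earlier (the pixel depths and return amplitudes are assumptions):

```python
import numpy as np

C = 3e8       # speed of light (m/s)
F_MOD = 30e6  # assumed modulation frequency (Hz)

def depth_from_buckets(b0, b90, b180, b270):
    """Four-bucket CW-ToF demodulation (phase -> depth)."""
    phase = np.arctan2(b90 - b270, b0 - b180) % (2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)

# Two neighboring pixels: a bright target at 1 m and a dim one at 3 m.
depths = np.array([1.0, 3.0])
amps = np.array([1.0, 0.4])  # assumed returned-signal amplitudes
phi = 4 * np.pi * F_MOD * depths / C
buckets = [amps * np.cos(phi - th)
           for th in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

# Defocus blur mixes the *raw measurements*, not the depths:
blurred = [b.mean() for b in buckets]
print(f"mean of the two depths:    {depths.mean():.3f} m")
print(f"depth of the mean buckets: {depth_from_buckets(*blurred):.3f} m")
```

Blurring the raw buckets of a bright 1 m pixel and a dim 3 m pixel demodulates to roughly 1.27 m, not the 2.0 m a linear depth-space blur would predict, because scene intensity enters the depth estimate nonlinearly; this is the nonlinearity the authors' simulator has to capture.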