Title: Exploiting & Refining Depth Distributions with Triangulation Light Curtains
Active sensing with adaptive depth sensors is a nascent field with potential in areas such as advanced driver-assistance systems (ADAS). These sensors, however, require dynamically driving a laser or light source to a specific location to capture information; one such class of sensor is the Triangulation Light Curtain (LC). In this work, we introduce a novel approach that exploits prior depth distributions from RGB cameras to drive a Light Curtain's laser line to regions of uncertainty and acquire new measurements. These measurements are used to recursively reduce depth uncertainty and correct errors. We present real-world experiments that validate our approach in outdoor and driving settings, and demonstrate qualitative and quantitative improvements in depth RMSE when RGB cameras are used in tandem with a Light Curtain.
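A minimal sketch of the refinement loop described above, assuming a per-column discrete depth distribution (e.g., from an RGB-based depth estimate), a placement strategy that samples the curtain profile from that distribution, and a heuristic likelihood for the light-curtain return; the function names, sensor model, and constants are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def plan_curtain(depth_probs, depth_bins):
    """Pick, for each camera column, the depth at which to place the light
    curtain. Sampling from the current per-column depth distribution sends
    more measurements to columns whose depth is still uncertain (a simple
    stand-in for uncertainty-driven placement)."""
    idx = np.array([rng.choice(depth_bins.size, p=p) for p in depth_probs])
    return depth_bins[idx]                        # (W,) curtain profile in metres

def update_depth(depth_probs, depth_bins, curtain_z, lc_return, sigma=0.10):
    """Recursive Bayesian update of each column's depth distribution from the
    light-curtain return: a strong return makes depths near the curtain more
    likely, a weak return makes them less likely (heuristic sensor model)."""
    near = np.exp(-0.5 * ((depth_bins[None, :] - curtain_z[:, None]) / sigma) ** 2)
    lik = 1.0 + 4.0 * lc_return[:, None] * near - 0.8 * (1.0 - lc_return[:, None]) * near
    post = depth_probs * np.clip(lik, 1e-3, None)
    return post / post.sum(axis=1, keepdims=True)

# One refinement step on a toy 4-column image with depths between 1 m and 10 m.
depth_bins = np.linspace(1.0, 10.0, 64)
probs = np.full((4, 64), 1.0 / 64)                # flat prior over depth for every column
curtain = plan_curtain(probs, depth_bins)
lc_return = np.array([0.9, 0.1, 0.8, 0.0])        # simulated light-curtain intensities
probs = update_depth(probs, depth_bins, curtain, lc_return)
```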
Award ID(s):
2038612 1900821
PAR ID:
10292848
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
IEEE Conference on Computer Vision and Pattern Recognition
ISSN:
2163-6648
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF). Several works extend these to dynamic scenes captured with monocular video, with promising performance. However, the monocular setting is known to be an under-constrained problem, and so methods rely on data-driven priors for reconstructing dynamic content. We replace these priors with measurements from a time-of-flight (ToF) camera, and introduce a neural representation based on an image formation model for continuous-wave ToF cameras. Instead of working with processed depth maps, we model the raw ToF sensor measurements to improve reconstruction quality and avoid issues with low reflectance regions, multi-path interference, and a sensor's limited unambiguous depth range. We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions, and discuss the benefits and limitations of integrating RGB+ToF sensors that are now available on modern smartphones. 
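The raw-measurement modeling idea above can be pictured with a simplified continuous-wave ToF image formation sketch, assuming a single-bounce scene, four evenly spaced phase shifts, and an arbitrary 30 MHz modulation frequency; this is a textbook-style simplification, not the paper's specific sensor model.

```python
import numpy as np

C = 3e8        # speed of light (m/s)
FREQ = 30e6    # modulation frequency (Hz); unambiguous range = C / (2 * FREQ) = 5 m

def tof_raw_measurements(depth, amplitude, offset=0.0):
    """Simulate the four phase-shifted correlation samples a continuous-wave
    ToF pixel records, under a simplified single-bounce model."""
    phase = 4.0 * np.pi * FREQ * depth / C                  # round-trip phase delay
    shifts = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi         # 0, 90, 180, 270 degrees
    return amplitude[..., None] * np.cos(phase[..., None] + shifts) + offset

def depth_from_raw(raw):
    """Standard phase decoding of the four raw samples back to depth."""
    i = raw[..., 0] - raw[..., 2]                           # in-phase component
    q = raw[..., 3] - raw[..., 1]                           # quadrature component
    phase = np.mod(np.arctan2(q, i), 2.0 * np.pi)
    return phase * C / (4.0 * np.pi * FREQ)

depth = np.array([1.0, 2.5, 4.9])     # metres, inside the 5 m unambiguous range
amp = np.array([1.0, 0.6, 0.2])       # per-pixel return strength
raw = tof_raw_measurements(depth, amp)
print(depth_from_raw(raw))            # recovers ~[1.0, 2.5, 4.9]
```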
  2. Time-resolved image sensors that capture light at pico- to nanosecond timescales were once limited to niche applications but are now rapidly becoming mainstream in consumer devices. We propose low-cost and low-power imaging modalities that capture scene information from minimal time-resolved image sensors with as few as one pixel. The key idea is to flood-illuminate large scene patches (or the entire scene) with a pulsed light source and measure the time-resolved reflected light by integrating over the entire illuminated area. The one-dimensional measured temporal waveform, called a transient, encodes both distances and albedos at all visible scene points and as such is an aggregate proxy for the scene's 3D geometry. We explore the viability and limitations of the transient waveforms by themselves for recovering scene information, and also when combined with traditional RGB cameras. We show that plane estimation can be performed from a single transient and that, using only a few more, it is possible to recover a depth map of the whole scene. We also show two proof-of-concept hardware prototypes that demonstrate the feasibility of our approach for compact, mobile, and budget-limited applications.
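A rough sketch of the transient measurement described above, assuming an idealized pulsed source, 100 ps time bins, and inverse-square falloff only (no pulse shape, noise, or surface-orientation terms); the constants and toy scene are assumptions made for illustration.

```python
import numpy as np

C = 3e8          # speed of light (m/s)
BIN = 100e-12    # 100 ps time bins (an illustrative sensor resolution)
N_BINS = 400     # covers round-trip delays out to 40 ns, i.e. depths up to 6 m

rng = np.random.default_rng(0)

def simulate_transient(depths, albedos):
    """Aggregate time-resolved return from flood-illuminating a set of scene
    points while integrating over a single pixel: every visible point adds an
    impulse at its round-trip delay, weighted by albedo and inverse-square
    falloff."""
    delays = 2.0 * depths / C
    weights = albedos / np.maximum(depths, 1e-3) ** 2
    hist, _ = np.histogram(delays, bins=N_BINS, range=(0.0, N_BINS * BIN), weights=weights)
    return hist                                    # the "transient" waveform

# Toy scene: a fronto-parallel plane at 2 m plus a small object at 1.2 m.
plane = 2.0 + 0.01 * rng.standard_normal(500)
scene_depths = np.concatenate([plane, np.full(50, 1.2)])
transient = simulate_transient(scene_depths, np.ones(scene_depths.size))

# The dominant peak of the transient recovers the plane depth to within one bin.
print(np.argmax(transient) * BIN * C / 2.0)        # ~2.0 m
```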
  3. Few colorimetric analyses of natural rainbows (i.e., bows seen in rain showers) have been published, and these are limited either to approximate techniques (colorimetrically calibrated red–green–blue (RGB) cameras) or to rainbow proxies (bows seen in sunlit water-drop sprays). Furthermore, no research papers provide angularly detailed spectra of natural rainbows in the visible and near-IR. Thus, some uncertainty exists about whether the published spectra and colors differ perceptibly from those in natural rainbows. However, battery-powered imaging spectrometers now make possible direct field measurements of the observed chromaticities and spectra in such bows. These data (1) show consistent spectral and colorimetric patterns along rainbow radii and (2) let one subtract additively mixed background light to reveal the intrinsic colors and spectra produced by rainbow scattering in nature.
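Since the light reaching the camera is an additive mixture, the background-subtraction step mentioned above can be sketched as follows; the color-matching-function table, uniform wavelength sampling, and function names are assumptions made for illustration.

```python
import numpy as np

def chromaticity(spectrum, cmf):
    """CIE 1931 xy chromaticity of a spectral radiance distribution.
    `cmf` is an (N, 3) table of the x-bar, y-bar, z-bar color-matching
    functions sampled at the same (uniformly spaced) wavelengths as
    `spectrum`; the constant wavelength spacing cancels in the ratio."""
    XYZ = spectrum @ cmf                 # tristimulus values, up to a scale factor
    return XYZ[:2] / XYZ.sum()

def intrinsic_rainbow_spectrum(on_bow, off_bow):
    """Estimate the spectrum contributed by rainbow scattering alone: because
    the measured light is an additive mixture, the spectrum taken just outside
    the bow (background sky) is subtracted from the spectrum taken on the bow."""
    return np.clip(on_bow - off_bow, 0.0, None)

# Usage (arrays assumed to be measured or loaded elsewhere):
# xy = chromaticity(intrinsic_rainbow_spectrum(bow_spectrum, sky_spectrum), cmf_table)
```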
  4. In this work, we integrate digital twin technology with RFID localization to achieve real-time monitoring of physical items in large-scale, complex environments such as warehouses and retail stores. To map item-level realities into a digital environment, we propose a sensor fusion technique that merges a 3D map created by RGB-D and tracking cameras with real-time RFID tag location estimates derived from our novel Bayesian filter approach. Unlike mainstream localization methods, which rely on phase or RSSI measurements, our proposed method leverages a fixed RF transmission power model. This approach extends localization capabilities to all existing RFID devices, offering a significant advancement over conventional techniques. As a result, the proposed method transforms any RFID device into a digital twin scanner with the support of RGB-D cameras. To evaluate the performance of the proposed method, we prototype the system with commercial off-the-shelf (COTS) equipment in two representative retail scenarios. The overall performance of the system is demonstrated in a mock retail apparel store covering an area of 207 m², while the quantitative experimental results are examined in a small-scale testbed to showcase the accuracy of item-level tag localization.
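A toy version of such a Bayesian filter, assuming the fixed transmission power implies a nominal read range and that each read/no-read event at a known antenna pose is the only observation; the particle-filter structure, ranges, and probabilities are illustrative guesses rather than the paper's actual model.

```python
import numpy as np

READ_RANGE = 3.0   # metres; nominal read range implied by the fixed TX power (assumed)
P_READ_IN = 0.8    # chance a tag inside range is read (assumed)
P_READ_OUT = 0.05  # occasional multipath/false reads outside range (assumed)

rng = np.random.default_rng(0)

def update(particles, weights, antenna_pos, was_read):
    """One Bayesian-filter update of the tag-position belief from a single
    read / no-read event at a known antenna pose: with fixed transmission
    power, the event only indicates whether the tag is likely within the
    nominal read range."""
    dist = np.linalg.norm(particles - antenna_pos, axis=1)
    p_read = np.where(dist < READ_RANGE, P_READ_IN, P_READ_OUT)
    weights = weights * (p_read if was_read else 1.0 - p_read)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * weights.size:
        idx = rng.choice(particles.shape[0], size=particles.shape[0], p=weights)
        particles = particles[idx]
        weights = np.full(weights.size, 1.0 / weights.size)
    return particles, weights

# Belief over a 10 m x 10 m floor, refined by events from two antenna poses.
particles = rng.uniform(0.0, 10.0, size=(5000, 2))
weights = np.full(5000, 1.0 / 5000)
particles, weights = update(particles, weights, np.array([2.0, 2.0]), True)
particles, weights = update(particles, weights, np.array([6.0, 2.0]), False)
print(np.average(particles, axis=0, weights=weights))   # posterior mean tag position
```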