
Title: Exploiting & Refining Depth Distributions with Triangulation Light Curtains
Active sensing with adaptive depth sensors is a nascent field, with potential in areas such as advanced driver-assistance systems (ADAS). These sensors, however, must dynamically steer a laser or light source to a specific location in order to capture information; the Triangulation Light Curtain (LC) is one such class of sensor. In this work, we introduce a novel approach that exploits prior depth distributions from RGB cameras to drive a Light Curtain's laser line to regions of high uncertainty and acquire new measurements. These measurements are used to recursively reduce depth uncertainty and correct errors. Real-world experiments in outdoor and driving settings validate our approach, and demonstrate qualitative and quantitative improvements in depth RMSE when RGB cameras are used in tandem with a Light Curtain.
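The exploit-and-refine loop described in the abstract can be illustrated with a short, hypothetical sketch (not the authors' implementation). It assumes a per-column Gaussian depth belief from an RGB-based estimator, places the curtain along the current depth estimates, and fuses each simulated curtain return with a Kalman-style update so that uncertainty shrinks and errors are corrected over successive placements. All function names, noise levels, and the placement strategy are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def place_curtain(mu, sigma):
    # Placement strategy (an assumption for illustration): image each column at its
    # current depth estimate; columns with large sigma benefit most from the return.
    return mu

def simulate_curtain_return(curtain_depth, true_depth, thickness=1.5, noise=0.05):
    # A column yields a measurement only if the true surface falls within the
    # curtain's thickness around the imaged depth (a deliberate simplification).
    hit = np.abs(curtain_depth - true_depth) < thickness
    z = true_depth + rng.normal(0.0, noise, size=true_depth.shape)
    return z, hit

def fuse(mu, sigma, z, sigma_meas, hit):
    # Per-column Gaussian (Kalman-style) update; columns without a return keep the prior.
    k = np.where(hit, sigma**2 / (sigma**2 + sigma_meas**2), 0.0)
    mu_new = mu + k * (z - mu)
    sigma_new = np.sqrt(1.0 - k) * sigma
    return mu_new, sigma_new

# Toy scene: 5 camera columns, depths in meters, with a noisy RGB-derived prior.
true_depth = np.array([4.0, 6.5, 8.0, 3.2, 10.0])
mu = true_depth + rng.normal(0.0, 1.0, size=5)   # prior mean (e.g., from a monocular depth network)
sigma = np.full(5, 1.0)                          # prior standard deviation

for step in range(3):                            # recursive refinement over curtain placements
    curtain = place_curtain(mu, sigma)
    z, hit = simulate_curtain_return(curtain, true_depth)
    mu, sigma = fuse(mu, sigma, z, sigma_meas=0.05, hit=hit)
    rmse = np.sqrt(np.mean((mu - true_depth)**2))
    print(f"step {step}: depth RMSE = {rmse:.3f} m")
```

Running the toy loop shows the depth RMSE dropping after each curtain placement, which mirrors the recursive reduction of uncertainty described above.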
Authors:
Award ID(s):
2038612 1900821
Publication Date:
NSF-PAR ID:
10292848
Journal Name:
IEEE Conference on Computer Vision and Pattern Recognition
ISSN:
2163-6648
Sponsoring Org:
National Science Foundation
More Like this
  1. Neural networks can represent and accurately reconstruct radiance fields for static 3D scenes (e.g., NeRF). Several works extend these to dynamic scenes captured with monocular video, with promising performance. However, the monocular setting is known to be an under-constrained problem, and so methods rely on data-driven priors for reconstructing dynamic content. We replace these priors with measurements from a time-of-flight (ToF) camera, and introduce a neural representation based on an image formation model for continuous-wave ToF cameras. Instead of working with processed depth maps, we model the raw ToF sensor measurements to improve reconstruction quality and avoid issues with low reflectance regions, multi-path interference, and a sensor's limited unambiguous depth range. We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions, and discuss the benefits and limitations of integrating RGB+ToF sensors that are now available on modern smartphones. (A simplified sketch of the continuous-wave ToF measurement model appears after this list.)
  2. Few colorimetric analyses of natural rainbows (i.e., bows seen in rain showers) have been published, and these are limited either to approximate techniques (colorimetrically calibrated red–green–blue (RGB) cameras) or to rainbow proxies (bows seen in sunlit water-drop sprays). Furthermore, no research papers provide angularly detailed spectra of natural rainbows in the visible and near-IR. Thus some uncertainty exists about whether the published spectra and colors differ perceptibly from those in natural rainbows. However, battery-powered imaging spectrometers now make possible direct field measurements of the observed chromaticities and spectra in such bows. These data (1) show consistent spectral and colorimetric patterns along rainbow radii and (2) let one subtract additively mixed background light to reveal the intrinsic colors and spectra produced by rainbow scattering in nature.

  3. Increasingly, drone-based photogrammetry has been used to measure size and body condition changes in marine megafauna. A broad range of platforms, sensors, and altimeters are being applied for these purposes, but there is no unified way to predict photogrammetric uncertainty across this methodological spectrum. As such, it is difficult to make robust comparisons across studies, disrupting collaborations amongst researchers using platforms with varying levels of measurement accuracy. Here we built off previous studies quantifying uncertainty and used an experimental approach to train a Bayesian statistical model using a known-sized object floating at the water’s surface to quantify how measurement error scales with altitude for several different drones equipped with different cameras, focal length lenses, and altimeters. We then applied the fitted model to predict the length distributions and estimate age classes of unknown-sized humpback whales Megaptera novaeangliae, as well as to predict the population-level morphological relationship between rostrum to blowhole distance and total body length of Antarctic minke whales Balaenoptera bonaerensis. This statistical framework jointly estimates errors from altitude and length measurements from multiple observations and accounts for altitudes measured with both barometers and laser altimeters while incorporating errors specific to each. This Bayesian model outputs a posterior predictive distribution of measurement uncertainty around length measurements and allows for the construction of highest posterior density intervals to define measurement uncertainty, which allows one to make probabilistic statements and stronger inferences pertaining to morphometric features critical for understanding life history patterns and potential impacts from anthropogenically altered habitats. (A rough sketch of how altimeter error propagates into photogrammetric length estimates appears after this list.)
  4. Across a plethora of social situations, we touch others in natural and intuitive ways to share thoughts and emotions, such as tapping to get one’s attention or caressing to soothe one’s anxiety. A deeper understanding of these human-to-human interactions will require, in part, the precise measurement of skin-to-skin physical contact. Among prior efforts, each measurement approach exhibits certain constraints, e.g., motion trackers do not capture the precise shape of skin surfaces, while pressure sensors impede skin-to-skin contact. In contrast, this work develops an interference-free 3D visual tracking system using a depth camera to measure the contact attributes between the bare hand of a toucher and the forearm of a receiver. The toucher’s hand is tracked as a posed and positioned mesh by fitting a hand model to detected 3D hand joints, whereas a receiver’s forearm is extracted as a 3D surface updated upon repeated skin contact. Based on a contact model involving point clouds, the spatiotemporal changes of hand-to-forearm contact are decomposed as six, high-resolution, time-series contact attributes, i.e., contact area, indentation depth, absolute velocity, and three orthogonal velocity components, together with contact duration. To examine the system’s capabilities and limitations, two types of experiments were performed. First, to evaluate its ability to discern human touches, one person delivered cued social messages, e.g., happiness, anger, sympathy, to another person using their preferred gestures. The results indicated that messages and gestures, as well as the identities of the touchers, were readily discerned from their contact attributes. Second, the system’s spatiotemporal accuracy was validated against measurements from independent devices, including an electromagnetic motion tracker, sensorized pressure mat, and laser displacement sensor. While validated here in the context of social communication, this system is extendable to human touch interactions such as maternal care of infants and massage therapy. (A rough sketch of computing contact attributes from point clouds appears after this list.)
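For item 1: a simplified, hypothetical model of continuous-wave ToF image formation, not the paper's exact formulation. It generates the four correlation samples a CW-ToF pixel records at different phase offsets, recovers depth from the estimated phase, and shows the phase wrap-around responsible for the limited unambiguous depth range mentioned in that abstract; the modulation frequency and four-tap sampling scheme are illustrative assumptions.

```python
import numpy as np

C = 3e8            # speed of light (m/s)
F_MOD = 30e6       # modulation frequency (Hz); unambiguous range = C / (2 * F_MOD) = 5 m

def cw_tof_measurements(depth_m, amplitude=1.0, offset=0.5):
    """Simplified CW-ToF image formation: correlation samples at four phase offsets."""
    phase = (4.0 * np.pi * F_MOD * depth_m) / C             # round-trip phase shift
    taps = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
    return offset + amplitude * np.cos(phase - taps)        # raw sensor samples

def depth_from_measurements(q):
    """Recover the phase (and hence depth) from the four correlation samples."""
    phase = np.arctan2(q[1] - q[3], q[0] - q[2]) % (2 * np.pi)
    return C * phase / (4.0 * np.pi * F_MOD)

for d in [1.0, 3.0, 6.0]:   # note: 6 m wraps to 1 m because of the 5 m unambiguous range
    q = cw_tof_measurements(d)
    print(f"true depth {d:.1f} m -> recovered {depth_from_measurements(q):.2f} m")
```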
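For item 3: the full framework is a Bayesian model fit to known-sized calibration objects; the hypothetical sketch below only illustrates the underlying geometry and why altimeter choice matters, by propagating barometer versus laser-altimeter altitude error through the basic photogrammetric length equation with a Monte-Carlo simulation. The camera parameters, noise levels, and pixel measurements are made-up illustrative values, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def photogrammetric_length(pixels, altitude_m, focal_mm, sensor_w_mm, image_w_px):
    """Basic single-image photogrammetry: ground sampling distance times pixel length."""
    gsd_m = (sensor_w_mm / 1000.0) * altitude_m / ((focal_mm / 1000.0) * image_w_px)
    return pixels * gsd_m

# Hypothetical drone / camera configuration (illustrative values only).
focal_mm, sensor_w_mm, image_w_px = 35.0, 13.2, 5472
pixels_measured = 1200                       # animal length measured in image pixels
altitude_reported = 40.0                     # altimeter reading (m)

# Monte-Carlo propagation of altimeter error: barometers are typically noisier
# than laser altimeters, so length uncertainty widens accordingly.
for name, altitude_sd in [("laser altimeter", 0.2), ("barometer", 1.5)]:
    altitude_samples = rng.normal(altitude_reported, altitude_sd, size=10_000)
    lengths = photogrammetric_length(pixels_measured, altitude_samples,
                                     focal_mm, sensor_w_mm, image_w_px)
    lo, hi = np.percentile(lengths, [2.5, 97.5])
    print(f"{name}: length = {lengths.mean():.2f} m, 95% interval [{lo:.2f}, {hi:.2f}] m")
```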
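For item 4: a rough, hypothetical sketch of how per-frame contact attributes such as contact area might be derived from a tracked hand point cloud and a forearm surface point cloud. The distance threshold, the per-point patch area, and the brute-force nearest-neighbour search are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np

def contact_attributes(hand_pts, forearm_pts, contact_thresh=0.003, point_area=1e-6):
    """Rough per-frame contact attributes from two point clouds (all units metres).

    Each hand point whose nearest forearm point lies within `contact_thresh` is
    treated as touching; contact area is the count of touching points times the
    small surface patch each point is assumed to represent. Brute-force nearest
    neighbour keeps the sketch dependency-free (a KD-tree would be used in practice).
    """
    d = np.linalg.norm(hand_pts[:, None, :] - forearm_pts[None, :, :], axis=-1)
    nearest = d.min(axis=1)
    touching = nearest < contact_thresh
    area = touching.sum() * point_area
    mean_gap = nearest[touching].mean() if touching.any() else np.nan
    return area, mean_gap

# Toy data: a flat forearm patch at z = 0 and a hand hovering 1 mm above part of it.
rng = np.random.default_rng(2)
forearm = np.column_stack([rng.uniform(0, 0.05, 500), rng.uniform(0, 0.05, 500), np.zeros(500)])
hand = np.column_stack([rng.uniform(0, 0.02, 200), rng.uniform(0, 0.02, 200), np.full(200, 0.001)])
area, gap = contact_attributes(hand, forearm)
print(f"contact area ~ {area * 1e4:.2f} cm^2, mean hand-to-forearm gap ~ {gap * 1000:.2f} mm")
```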