

Title: Event-based dual photography for transparent scene reconstruction

Light transport contains all light information between a light source and an image sensor. As an important application of light transport, dual photography has been a popular research topic, but it is challenged by long acquisition times, low signal-to-noise ratio, and the storage and processing of a large number of measurements. In this Letter, we propose a novel hardware setup that combines a flying-spot micro-electromechanical-system (MEMS) modulated projector with an event camera to implement dual photography for 3D scanning in both line-of-sight (LoS) and non-line-of-sight (NLoS) scenes with a transparent object. In particular, we achieved depth extraction from the LoS scenes and 3D reconstruction of the object in an NLoS scene using event light transport.
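The core idea of dual photography can be sketched with a toy light-transport matrix: by Helmholtz reciprocity, the dual image (the scene as seen from the projector's viewpoint, lit from the camera) follows from transposing the measured transport. A minimal sketch, with a random sparse matrix standing in for a real flying-spot capture:

```python
import numpy as np

# Toy light-transport matrix T: rows index camera pixels, columns index
# projector pixels. In a flying-spot setup each column would be captured
# by illuminating one projector pixel at a time; here it is simulated.
rng = np.random.default_rng(0)
T = rng.random((16, 16)) * (rng.random((16, 16)) < 0.2)

projector_pattern = np.ones(16)       # floodlit projector
camera_image = T @ projector_pattern  # primal image seen by the camera

# Helmholtz reciprocity: the dual image (scene viewed from the projector,
# lit from the camera position) is obtained by transposing T.
camera_lighting = np.ones(16)
dual_image = T.T @ camera_lighting
```

With uniform illumination, the primal image reduces to the row sums of T and the dual image to its column sums, which makes the transpose relationship easy to verify.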

 
Award ID(s):
1909192
NSF-PAR ID:
10399369
Publisher / Repository:
Optical Society of America
Date Published:
Journal Name:
Optics Letters
Volume:
48
Issue:
5
ISSN:
0146-9592; OPLEDP
Format(s):
Medium: X Size: Article No. 1304
Sponsoring Org:
National Science Foundation
More Like this
  1. We propose a novel non-line-of-sight (NLOS) imaging framework with long-wave infrared (IR) light. At long-wave IR wavelengths, certain physical parameters are more favorable for high-fidelity reconstruction. In contrast to prior work in visible-light NLOS, at long-wave IR wavelengths the hidden heat source acts as a light source. This simplifies the problem to a single-bounce problem. In addition, surface reflectance has a much stronger specular component in the long-wave IR spectrum than in the visible-light spectrum. We reformulate a light transport model that leverages these favorable physical properties of long-wave IR. Specifically, we demonstrate 2D shape recovery and 3D localization of a hidden object. Furthermore, we demonstrate near real-time and robust NLOS pose estimation of a human figure, the first such demonstration, to our knowledge.
  2. Abstract

    Non-Line-Of-Sight (NLOS) imaging aims at recovering the 3D geometry of objects that are hidden from the direct line of sight. One major challenge with this technique is the weak available multibounce signal, which limits scene size, capture speed, and reconstruction quality. To overcome this obstacle, we introduce a multipixel time-of-flight non-line-of-sight imaging method combining specifically designed Single Photon Avalanche Diode (SPAD) array detectors with a fast reconstruction algorithm that captures and reconstructs live low-latency videos of non-line-of-sight scenes with natural, non-retroreflective objects. We develop a model of the signal-to-noise ratio of non-line-of-sight imaging and use it to devise a method that reconstructs the scene such that signal-to-noise ratio, motion blur, angular resolution, and depth resolution are all independent of scene depth, suggesting that reconstruction of very large scenes may be possible.

     
  3. The pervasive operation of customer drones, or small-scale unmanned aerial vehicles (UAVs), has raised serious concerns about their privacy threats to the public. In recent years, privacy invasion events caused by customer drones have been frequently reported. Given this, timely detection of invading drones has become an emerging task. Existing solutions using active radar, video, or acoustic sensors are usually too costly (especially for individuals) or exhibit various constraints (e.g., requiring visual line of sight). Recent research on drone detection with passive RF signals provides an opportunity for low-cost deployment of drone detectors on commodity wireless devices. However, the state of the art in this direction relies on line-of-sight (LOS) RF signals, which only work under very constrained conditions. Support for more common scenarios, i.e., non-line-of-sight (NLOS), is still missing for low-cost solutions. In this paper, we propose a novel detection system for privacy invasion caused by customer drones. Our system features accurate NLOS detection with low-cost hardware (under $50). By exploring and validating the relationship between drone motion and RF signals under the NLOS condition, we find that the RF signatures of drones are somewhat "amplified" by multipath in NLOS. Based on this observation, we design a two-step solution that first classifies received RSS measurements into LOS and NLOS categories; deep learning is then used to extract the signatures and ultimately detect the drones. Our experimental results show that LOS and NLOS signals can be identified at accuracy rates of 98.4% and 96%, respectively. Our drone detection rate for the NLOS condition is above 97% with a system implemented on a Raspberry Pi 3 B+.
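The two-step pipeline described above can be illustrated with a minimal sketch. The variance threshold, the spectral peak test, and all numeric constants below are illustrative assumptions standing in for the paper's trained deep-learning models:

```python
import numpy as np

def classify_los_nlos(rss_window, var_threshold=4.0):
    """Step 1 (hypothetical threshold): multipath in NLOS 'amplifies' the
    drone's RF signature, so NLOS windows show higher RSS variance."""
    return "NLOS" if np.var(rss_window) > var_threshold else "LOS"

def detect_drone(rss_window):
    """Step 2 placeholder: the paper uses deep learning on the extracted
    signatures; a simple spectral-peak test stands in for it here.
    Drone motion induces periodic RSS fluctuations -> a dominant peak."""
    spectrum = np.abs(np.fft.rfft(rss_window - np.mean(rss_window)))
    return bool(spectrum.max() > 3.0 * np.median(spectrum))
```

A synthetic RSS window with a strong periodic component is classified as NLOS and flagged as a drone, while a low-variance noise window is classified as LOS.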
    Non-line-of-sight (NLOS) imaging is a rapidly advancing technology that provides asymmetric vision: seeing without being seen. Though limited in accuracy, resolution, and depth recovery compared to active methods, the capabilities of passive methods are especially surprising because they typically use only a single, inexpensive digital camera. One of the largest challenges in passive NLOS imaging is ambient background light, which limits the dynamic range of the measurement while carrying no useful information about the hidden part of the scene. In this work we propose a new reconstruction approach that uses an optimized linear transformation to balance the rejection of uninformative light with the retention of informative light, resulting in fast (video-rate) reconstructions of hidden scenes from photographs of a blank wall under high ambient light conditions. 
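As a toy illustration of rejecting uninformative (ambient) light with a linear transform: the sketch below projects out the uniform background direction of a simulated wall photograph and then inverts the wall-transport matrix with Tikhonov regularization. The transport matrix, ambient model, and regularization weight are all assumptions for illustration, not the paper's optimized transform:

```python
import numpy as np

rng = np.random.default_rng(1)
n_scene, n_wall = 8, 32
A = rng.random((n_wall, n_scene))   # light transport: hidden scene -> wall
ambient = 50.0 * np.ones(n_wall)    # uniform background light on the wall

x_true = rng.random(n_scene)        # hidden scene intensities
y = A @ x_true + ambient            # photograph of the blank wall

# Hypothetical linear transform: project out the uniform (ambient)
# direction, then apply a Tikhonov-regularized pseudoinverse, balancing
# rejection of uninformative light against retention of informative light.
P = np.eye(n_wall) - np.ones((n_wall, n_wall)) / n_wall  # removes DC term
lam = 1e-3
W = np.linalg.solve((P @ A).T @ (P @ A) + lam * np.eye(n_scene), (P @ A).T)
x_hat = W @ (P @ y)
```

Because the ambient term lies exactly in the direction removed by P, the reconstruction recovers the hidden intensities up to the small bias introduced by regularization.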
  5. In this work, we propose a novel approach for high-accuracy user localization by merging tools from both millimeter wave (mmWave) imaging and communications. The key idea of the proposed solution is to leverage mmWave imaging to construct a high-resolution 3D image of the line-of-sight (LOS) and non-line-of-sight (NLOS) objects in the environment at one antenna array. Then, uplink pilot signaling with the user is used to estimate the angle-of-arrival (AoA) and time-of-arrival (ToA) of the dominant channel paths. By projecting the AoA and ToA information onto the 3D mmWave image of the environment, the proposed solution can locate the user with sub-centimeter accuracy. This approach has several gains. First, it allows accurate simultaneous localization and mapping (SLAM) from a single standpoint, i.e., using only one antenna array. Second, it does not require any prior knowledge of the surrounding environment. Third, it can locate NLOS users, even if their signals experience more than one reflection and without requiring an antenna array at the user. The approach is evaluated using a hardware setup, and its ability to provide sub-centimeter localization accuracy is shown.
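The projection step can be sketched for the simple single-path (LOS) case, where the ToA fixes the path length and the AoA fixes the direction; real NLOS paths would instead be traced against the reconstructed 3D mmWave image. The geometry and function name here are illustrative:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def locate_user(aoa_az_rad, aoa_el_rad, toa_s):
    """Simplified LOS projection: the one-way path length follows from
    the ToA of the uplink pilot (assuming synchronization), and the AoA
    gives the direction, so the user sits at range * unit-direction
    relative to the antenna array."""
    r = C * toa_s
    x = r * math.cos(aoa_el_rad) * math.cos(aoa_az_rad)
    y = r * math.cos(aoa_el_rad) * math.sin(aoa_az_rad)
    z = r * math.sin(aoa_el_rad)
    return (x, y, z)
```

For example, a pilot arriving broadside (zero azimuth and elevation) after the time light needs to travel 10 m places the user 10 m straight ahead of the array.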