Title: Deep Separation of Direct and Global Components from a Single Photograph under Structured Lighting
Abstract: We present a deep learning based solution for separating the direct and global light transport components from a single photograph captured under high frequency structured lighting with a co‐axial projector‐camera setup. We employ an architecture with one encoder and two decoders that shares information between the encoder and the decoders, as well as between both decoders to ensure a consistent decomposition between both light transport components. Furthermore, our deep learning separation approach does not require binary structured illumination, allowing us to utilize the full resolution capabilities of the projector. Consequently, our deep separation network is able to achieve high fidelity decompositions for lighting frequency sensitive features such as subsurface scattering and specular reflections. We evaluate and demonstrate our direct and global separation method on a wide variety of synthetic and captured scenes.
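For context, the classical multi-image baseline that this single-photograph network replaces can be sketched in a few lines. Under shifted high-frequency binary patterns with a 50% on-fraction, every pixel is fully lit in some frame and unlit in another, so the direct component is Lmax − Lmin and the global component is 2·Lmin per pixel. This is a sketch of the Nayar-style separation, not the paper's network; the function name is ours.

```python
import numpy as np

def separate_direct_global(images):
    # Classical high-frequency separation: with shifted binary
    # patterns of 50% on-fraction, each pixel is fully illuminated
    # in some frame (l_max) and receives only half the global
    # component when unlit (l_min), giving:
    #   direct = l_max - l_min,   global = 2 * l_min
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    l_max = stack.max(axis=0)
    l_min = stack.min(axis=0)
    return l_max - l_min, 2.0 * l_min
```

For a pixel with direct radiance 3 and global radiance 2, a lit frame records 3 + 2/2 = 4 and an unlit frame records 2/2 = 1, and the sketch recovers (3, 2) exactly; the deep method in the paper produces the same decomposition from one structured-light photograph.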
Award ID(s):
1909028
PAR ID:
10202877
Author(s) / Creator(s):
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Computer Graphics Forum
Volume:
39
Issue:
7
ISSN:
0167-7055
Page Range / eLocation ID:
p. 459-470
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Precomputed Radiance Transfer (PRT) remains an attractive solution for real‐time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real‐time. However, practical PRT methods are usually limited to low‐frequency spherical harmonic lighting. All‐frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or only direct lighting with triple product integrals. In this paper, we demonstrate a hybrid neural‐wavelet PRT solution to high‐frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we seek to represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi‐layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters being other MLP inputs. We optimize/learn the feature field (compactly represented by a tensor decomposition) and MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real‐time (512×512 at 24 FPS, 800×600 at 13 FPS) precomputed rendering of challenging scenes involving view‐dependent reflections and even caustics.
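The Haar-basis representation above rests on a simple fact: relighting is a dot product between transport and lighting coefficients, and an orthonormal wavelet transform preserves dot products, so the product can be evaluated in the wavelet domain instead. A minimal 1D sketch (our own illustration of the Haar transform and Parseval identity, not the paper's MLP pipeline):

```python
import numpy as np

def haar(x):
    # Orthonormal 1D Haar transform (len(x) must be a power of 2):
    # repeatedly split into pairwise averages and differences,
    # recursing on the averages (Mallat ordering).
    out = np.asarray(x, dtype=float).copy()
    n = len(out)
    while n > 1:
        a = (out[:n:2] + out[1:n:2]) / np.sqrt(2.0)  # coarse averages
        d = (out[:n:2] - out[1:n:2]) / np.sqrt(2.0)  # detail coefficients
        out[:n // 2] = a
        out[n // 2:n] = d
        n //= 2
    return out
```

Because the transform is orthonormal, `np.dot(t, l) == np.dot(haar(t), haar(l))` for any transport row `t` and lighting vector `l`, which is what lets a network predict sparse wavelet transport coefficients and still relight exactly by a coefficient dot product.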
  2. Light-transport represents the complex interactions of light in a scene. Fast, compressed, and accurate light-transport capture for dynamic scenes is an open challenge in vision and graphics. In this paper, we integrate the classical idea of Lissajous sampling with novel control strategies for dynamic light-transport applications such as relighting water drops and seeing around corners. In particular, this paper introduces an improved Lissajous projector hardware design and discusses calibration and capture for a microelectromechanical (MEMS) mirror-based projector. Further, we show progress towards speeding up the hardware-based Lissajous subsampling for dual light transport frames, and investigate interpolation algorithms for recovering the missing data. Our captured dynamic light transport results show complex light scattering effects for dense angular sampling, and we also show dual non-line-of-sight (NLoS) capture of dynamic scenes. This work is the first step towards adaptive Lissajous control for dynamic light-transport.
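The Lissajous scan pattern referenced above is just two sinusoids at different frequencies driving the mirror's two axes; the frequency ratio and relative phase determine how densely the curve covers the scan area. A small sketch of generating such sample positions (illustrative of the sampling pattern only, not the paper's hardware or control strategy; the function name and parameters are ours):

```python
import numpy as np

def lissajous_samples(fx, fy, phase, n):
    # Positions along a Lissajous curve: the two mirror axes are
    # driven sinusoidally at frequencies fx and fy with a relative
    # phase offset; n samples over one unit period.
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    x = np.sin(2.0 * np.pi * fx * t)
    y = np.sin(2.0 * np.pi * fy * t + phase)
    return np.stack([x, y], axis=1)
```

With coprime integer frequencies (e.g. `fx=3, fy=4`) the curve is closed and fills the square densely as more samples are taken, which is what makes subsampling and later interpolation of the missing transport entries plausible.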
  3. Lighting consumes ~10% of the global electricity. White laser lighting, utilizing either direct color mixing or phosphor-conversion, can potentially boost the efficiency well beyond existing light emitting diodes (LEDs), especially at high current density. Here we present a compact, universal packaging platform for both laser lighting schemes, which is simultaneously scalable to wavelength division multiplexing for visible light communication. Using commercially available laser diodes and optical components, low speckle contrast (≤ 5%) and uniform illumination are achieved by multi-stage scattering and photon recycling through a mixing rod and static diffusers in a truncated-pyramidal reflective cavity. We demonstrate a high luminous efficacy of 274 lm/Wo for phosphor-converted laser lighting and 150 lm/Wo for direct red-green-blue laser mixing, respectively, in reference to the input optical power. In the former case, the luminous efficacy achieved for practical lighting is even higher than most of the previous reports measured using integrating spheres. In the latter case of direct laser color mixing, to the best of our knowledge, this is the first time a luminous efficacy approaching that of phosphor-conversion counterparts has been achieved in a compact package applicable to practical lighting. With future improvement of blue laser diode efficiency and development of yellow/amber/orange laser diodes, we envision that this universal white laser package can potentially achieve a luminous efficacy > 275 lm/We in reference to the input electrical power, ~1.5x higher than state-of-the-art LED lighting and exceeding the target of 249 lm/We for 2035.
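A back-of-envelope note on the two units quoted above: efficacy in reference to electrical power (lm/We) is the optical-referenced efficacy (lm/Wo) scaled by the diode's wall-plug (electrical-to-optical) efficiency. The 45% efficiency below is an assumed illustrative figure, not a number from the paper:

```python
# Convert optical-referenced luminous efficacy to an
# electrical-referenced estimate.
optical_efficacy = 274.0     # lm/Wo, phosphor-converted case (from abstract)
wall_plug_efficiency = 0.45  # assumed blue-diode wall-plug efficiency
electrical_efficacy = optical_efficacy * wall_plug_efficiency  # lm/We
```

This makes clear why the > 275 lm/We projection hinges on "future improvement of blue laser diode efficiency": at an assumed 45% wall-plug efficiency the electrical-referenced figure is well below the optical-referenced one.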
  4. This paper presents an absolute phase unwrapping method for high-speed three-dimensional (3D) shape measurement. This method uses three phase-shifted patterns and one binary random pattern on a single-camera, single-projector structured light system. We calculate the wrapped phase from phase-shifted images and determine the coarse correspondence through the digital image correlation (DIC) between the captured binary random pattern of the object and the pre-captured binary random pattern of a flat surface. We then develop a computational framework to determine the fringe order number pixel by pixel using the coarse correspondence information. Since only one additional pattern is used, the proposed method can be used for high-speed 3D shape measurement. Experimental results successfully demonstrated that the proposed method can achieve high-speed and high-quality measurement of complex scenes.
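The wrapped-phase computation in the first step above is the standard three-step phase-shifting formula (phase shifts of 2π/3); the paper's contribution is the unwrapping that follows, but the textbook formula is useful context:

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    # Three-step phase shifting with shifts of -2*pi/3, 0, +2*pi/3:
    #   i_k = A + B*cos(phi + shift_k)
    # The differences cancel the ambient term A and modulation B,
    # leaving the wrapped phase phi in (-pi, pi].
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

Because `arctan2` only determines the phase modulo 2π, every pixel needs a fringe order number to recover absolute phase, which is exactly what the paper's binary-random-pattern correspondence supplies.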
  5. Full surround 3D imaging for shape acquisition is essential for generating digital replicas of real-world objects. Surrounding an object we seek to scan with a kaleidoscope, that is, a configuration of multiple planar mirrors, produces an image of the object that encodes information from a combinatorially large number of virtual viewpoints. This information is practically useful for the full surround 3D reconstruction of the object, but cannot be used directly, as we do not know what virtual viewpoint each image pixel corresponds---the pixel label. We introduce a structured light system that combines a projector and a camera with a kaleidoscope. We then prove that we can accurately determine the labels of projector and camera pixels, for arbitrary kaleidoscope configurations, using the projector-camera epipolar geometry. We use this result to show that our system can serve as a multi-view structured light system with hundreds of virtual projectors and cameras. This makes our system capable of scanning complex shapes precisely and with full coverage. We demonstrate the advantages of the kaleidoscopic structured light system by scanning objects that exhibit a large range of shapes and reflectances. 
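The epipolar-geometry labeling idea can be illustrated with the basic constraint it relies on: a true projector-camera correspondence (x, x') for virtual view i satisfies x'ᵀ Fᵢ x = 0 for that view's fundamental matrix. A toy scoring sketch (our own illustration of the constraint; the paper proves labels are exactly recoverable, not merely scored this way):

```python
import numpy as np

def label_by_epipolar(x_cam, x_proj, fundamentals):
    # Given one candidate fundamental matrix per virtual viewpoint,
    # pick the label whose epipolar residual |x'^T F x| is smallest
    # for this camera/projector pixel pair.
    xc = np.append(np.asarray(x_cam, dtype=float), 1.0)   # homogeneous
    xp = np.append(np.asarray(x_proj, dtype=float), 1.0)
    residuals = [abs(xp @ F @ xc) for F in fundamentals]
    return int(np.argmin(residuals))
```

For example, with a fundamental matrix for a horizontal baseline (points displaced only in x satisfy it exactly) versus one for a vertical baseline, a horizontally displaced pixel pair scores zero residual for the first and is labeled accordingly.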