Precomputed Radiance Transfer (PRT) remains an attractive solution for real-time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real time. However, practical PRT methods are usually limited to low-frequency spherical harmonic lighting. All-frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or to direct lighting with triple product integrals. In this paper, we demonstrate a hybrid neural-wavelet PRT solution for high-frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi-layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters as additional MLP inputs. We optimize the feature field (compactly represented by a tensor decomposition) and MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real-time (512×512 at 24 FPS, 800×600 at 13 FPS) precomputed rendering of challenging scenes involving view-dependent reflections and even caustics.
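Concretely, the relighting step described above reduces to an inner product between per-pixel transport coefficients and the environment map's wavelet coefficients. The sketch below is a minimal illustration, not the paper's code: the transport row is random here, where the paper's MLP would predict it from the tensor-decomposed feature field. It shows an orthonormal 2D Haar transform and the largest-coefficient nonlinear approximation typical of all-frequency relighting.

```python
import numpy as np

def haar_step(x):
    # One level of the orthonormal Haar transform along the last axis:
    # pairwise sums (low-pass) then pairwise differences (detail), scaled.
    even, odd = x[..., 0::2], x[..., 1::2]
    return np.concatenate([even + odd, even - odd], axis=-1) / np.sqrt(2.0)

def haar2d(img, levels):
    # Separable 2D Haar transform of a square, power-of-two-sized image.
    c = img.astype(np.float64).copy()
    n = c.shape[0]
    for _ in range(levels):
        s = haar_step(c[:n, :n])                                 # rows
        c[:n, :n] = haar_step(s.swapaxes(0, 1)).swapaxes(0, 1)   # columns
        n //= 2
    return c

# Relighting = dot product of transport and lighting coefficients; keeping
# only the largest |L_j| terms is the usual nonlinear wavelet approximation.
env = np.random.rand(64, 64)            # stand-in environment map
L = haar2d(env, levels=6).ravel()       # lighting in the Haar basis
keep = np.argsort(-np.abs(L))[:256]     # 256 strongest wavelet terms
T = np.random.rand(L.size)              # stand-in per-pixel transport row
radiance = T[keep] @ L[keep]            # outgoing radiance at this pixel
```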
We present a deep-learning-based solution for separating the direct and global light transport components from a single photograph captured under high-frequency structured lighting with a coaxial projector-camera setup. We employ an architecture with one encoder and two decoders that shares information between the encoder and the decoders, as well as between the two decoders, to ensure a consistent decomposition of the two light transport components. Furthermore, our deep-learning separation approach does not require binary structured illumination, allowing us to utilize the full resolution capabilities of the projector. Consequently, our deep separation network achieves high-fidelity decompositions for lighting-frequency-sensitive features such as subsurface scattering and specular reflections. We evaluate and demonstrate our direct and global separation method on a wide variety of synthetic and captured scenes.
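As a rough sketch of the one-encoder, two-decoder idea (the channel counts, depths, and 1×1 cross-links are illustrative assumptions, not the paper's architecture), the PyTorch snippet below shows both decoders consuming the shared encoder features while each also sees the other branch's intermediate state, which is what encourages a consistent direct/global split.

```python
import torch
import torch.nn as nn

class SeparationNet(nn.Module):
    """Toy one-encoder / two-decoder network; all sizes are illustrative."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.mix_d = nn.Conv2d(ch, ch, 1)          # direct-branch state
        self.mix_g = nn.Conv2d(ch, ch, 1)          # global-branch state
        self.dec_direct = nn.Conv2d(2 * ch, 3, 3, padding=1)
        self.dec_global = nn.Conv2d(2 * ch, 3, 3, padding=1)

    def forward(self, x):
        f = self.enc(x)                            # shared encoder features
        fd = torch.relu(self.mix_d(f))
        fg = torch.relu(self.mix_g(f))
        # each decoder sees its own state plus the sibling branch's state
        direct = self.dec_direct(torch.cat([fd, fg], dim=1))
        globl = self.dec_global(torch.cat([fg, fd], dim=1))
        return direct, globl

direct, globl = SeparationNet()(torch.rand(1, 3, 64, 64))
```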
- Award ID(s): 1909028
- NSF-PAR ID: 10202877
- Publisher / Repository: Wiley-Blackwell
- Date Published:
- Journal Name: Computer Graphics Forum
- Volume: 39
- Issue: 7
- ISSN: 0167-7055
- Page Range / eLocation ID: p. 459-470
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Structured light illumination is an active three-dimensional scanning technique that uses a projector and camera pair to project and capture a series of stripe patterns; however, with a single camera and a single projector, structured light scanning suffers from scan occlusions, multi-path interference, and weak signal reflections. To address these issues, this paper proposes dual-projector scanning using a range of projector/camera arrangements. Unlike previous attempts at dual-projector scanning, the proposed scanner drives both light engines simultaneously, using temporal-frequency multiplexing to computationally decouple the projected patterns. Besides presenting the details of how such a system is built, we also present experimental results demonstrating how multiple projectors can be used to (1) minimize occlusions; (2) achieve higher signal-to-noise ratios by delivering twice the brightness of a single projector; (3) reduce the number of component video frames required for a scan; and (4) detect multi-path interference.
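To make the multiplexing step concrete, the toy example below (the carrier frequencies, frame count, and per-pixel responses are invented for illustration) shows how driving two projectors at distinct temporal frequencies lets a per-pixel FFT over the captured sequence pull each projector's contribution out of its own frequency bin.

```python
import numpy as np

F, f1, f2 = 32, 3, 7                    # frames per sequence; carrier bins
t = np.arange(F)
P1 = 0.5 + 0.5 * np.cos(2 * np.pi * f1 * t / F)   # projector 1 carrier
P2 = 0.5 + 0.5 * np.cos(2 * np.pi * f2 * t / F)   # projector 2 carrier
A1, A2 = 0.8, 0.3                       # a pixel's response to each projector
I = A1 * P1 + A2 * P2                   # both light engines on simultaneously

spec = np.fft.rfft(I) / F
# each projector's contribution lands in its own temporal-frequency bin
a1 = 4.0 * np.abs(spec[f1])             # recovers A1 (carrier amplitude 0.5)
a2 = 4.0 * np.abs(spec[f2])             # recovers A2
```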
Light transport represents the complex interactions of light in a scene. Fast, compressed, and accurate light-transport capture for dynamic scenes is an open challenge in vision and graphics. In this paper, we integrate the classical idea of Lissajous sampling with novel control strategies for dynamic light-transport applications such as relighting water drops and seeing around corners. In particular, this paper introduces an improved Lissajous projector hardware design and discusses calibration and capture for a microelectromechanical systems (MEMS) mirror-based projector. Further, we show progress towards speeding up the hardware-based Lissajous subsampling for dual light-transport frames, and investigate interpolation algorithms for recovering the missing data. Our captured dynamic light-transport results show complex light-scattering effects for dense angular sampling, and we also show dual non-line-of-sight (NLoS) capture of dynamic scenes. This work is a first step towards adaptive Lissajous control for dynamic light transport.
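For intuition, a Lissajous scan path and its coverage can be generated in a few lines; the frequency pair, phase offset, and grid resolution below are illustrative choices, not the calibrated MEMS mirror parameters, and the unvisited pixels are exactly what the interpolation algorithms must fill in.

```python
import numpy as np

fx, fy, phase = 7, 11, np.pi / 2        # coprime frequencies cover the field
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
x = 0.5 + 0.5 * np.sin(2 * np.pi * fx * t)           # mirror x deflection
y = 0.5 + 0.5 * np.sin(2 * np.pi * fy * t + phase)   # mirror y deflection

# rasterize the visited pixels on a coarse grid; holes need interpolation
grid = np.zeros((64, 64), dtype=bool)
grid[(y * 63).astype(int), (x * 63).astype(int)] = True
coverage = grid.mean()                  # fraction of pixels the path visits
```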
This paper presents an absolute phase unwrapping method for high-speed three-dimensional (3D) shape measurement. The method uses three phase-shifted patterns and one binary random pattern on a single-camera, single-projector structured light system. We calculate the wrapped phase from the phase-shifted images and determine a coarse correspondence through digital image correlation (DIC) between the captured binary random pattern of the object and a pre-captured binary random pattern of a flat surface. We then develop a computational framework that determines the fringe order pixel by pixel using the coarse correspondence information. Since only one additional pattern is used, the proposed method is suitable for high-speed 3D shape measurement. Experimental results demonstrate that the proposed method achieves high-speed, high-quality measurement of complex scenes.
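For reference, the standard three-step wrapped-phase formula and the coarse-guided fringe-order step might look like the sketch below. It assumes phase shifts of -2π/3, 0, +2π/3 and a per-pixel coarse phase such as one derived from the DIC correspondence; it is not the authors' code.

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    # three-step phase shifting with shifts -2*pi/3, 0, +2*pi/3
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def unwrap_with_coarse(phi, phi_coarse):
    # fringe order k chosen per pixel so phi + 2*pi*k matches the coarse phase
    k = np.round((phi_coarse - phi) / (2.0 * np.pi))
    return phi + 2.0 * np.pi * k

# round-trip check on a synthetic absolute phase spanning several fringes
phi_true = np.linspace(0.0, 6.0 * np.pi, 512)
i1 = 0.5 + 0.4 * np.cos(phi_true - 2.0 * np.pi / 3.0)
i2 = 0.5 + 0.4 * np.cos(phi_true)
i3 = 0.5 + 0.4 * np.cos(phi_true + 2.0 * np.pi / 3.0)
phi = wrapped_phase(i1, i2, i3)
assert np.allclose(unwrap_with_coarse(phi, phi_true + 0.3), phi_true)
```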
Light transport contains all light information between a light source and an image sensor. As an important application of light transport, dual photography has been a popular research topic, but it is challenged by long acquisition times, low signal-to-noise ratios, and the storage and processing of a large number of measurements. In this Letter, we propose a novel hardware setup that combines a flying-spot microelectromechanical system (MEMS) modulated projector with an event camera to implement dual photography for 3D scanning in both line-of-sight (LoS) and non-line-of-sight (NLoS) scenes containing a transparent object. In particular, we achieve depth extraction from the LoS scenes and 3D reconstruction of the object in an NLoS scene using event light transport.
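The dual-photography identity this line of work builds on fits in a few lines: if a transport matrix T maps projector pixels to camera pixels, Helmholtz reciprocity lets the transpose of T render the scene from the projector's viewpoint. The sketch below uses a random stand-in T and toy resolutions; real systems measure T (or avoid storing it outright) rather than having it given.

```python
import numpy as np

n_cam, n_proj = 16, 9              # toy camera / projector pixel counts
T = np.random.rand(n_cam, n_proj)  # light transport: camera = T @ projector

p = np.random.rand(n_proj)         # a primal illumination pattern
c = T @ p                          # primal image captured by the camera

# dual photography: T's transpose images the scene from the projector's
# view, here "floodlit" by a virtual all-ones pattern at the camera
dual = T.T @ np.ones(n_cam)
```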