Title: Deep Separation of Direct and Global Components from a Single Photograph under Structured Lighting
Abstract

We present a deep-learning-based solution for separating the direct and global light transport components from a single photograph captured under high-frequency structured lighting with a co-axial projector-camera setup. We employ an architecture with one encoder and two decoders that shares information between the encoder and the decoders, as well as between the two decoders, to ensure a consistent decomposition into both light transport components. Furthermore, our deep learning separation approach does not require binary structured illumination, allowing us to utilize the full resolution capabilities of the projector. Consequently, our deep separation network is able to achieve high-fidelity decompositions of lighting-frequency-sensitive features such as subsurface scattering and specular reflections. We evaluate and demonstrate our direct and global separation method on a wide variety of synthetic and captured scenes.
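To make the network topology above concrete, the following PyTorch sketch wires a single shared encoder to two decoders that exchange intermediate features so that the direct and global predictions remain consistent. The layer sizes, skip connections, and placement of the cross-decoder exchange are illustrative assumptions, not the authors' exact architecture or training setup.

```python
# Minimal sketch of a one-encoder / two-decoder separation network
# (hypothetical layer sizes; the paper's exact architecture, losses,
# and skip wiring may differ).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class SeparationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)           # shared encoder
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # one decoder per light-transport component
        self.dec_direct = conv_block(64 + 32, 32)
        self.dec_global = conv_block(64 + 32, 32)
        # decoder-to-decoder exchange to keep the two outputs consistent
        self.cross = conv_block(64, 32)
        self.out_direct = nn.Conv2d(32, 3, 1)
        self.out_global = nn.Conv2d(32, 3, 1)

    def forward(self, x):
        f1 = self.enc1(x)                        # encoder features (skip source)
        f2 = self.enc2(self.pool(f1))
        up = self.up(f2)
        d = self.dec_direct(torch.cat([up, f1], dim=1))
        g = self.dec_global(torch.cat([up, f1], dim=1))
        mixed = self.cross(torch.cat([d, g], dim=1))   # share info between decoders
        return self.out_direct(d + mixed), self.out_global(g + mixed)

direct, global_ = SeparationNet()(torch.rand(1, 3, 128, 128))  # example forward pass
```

The shared `cross` features give each decoder access to the other's intermediate estimate, which is one simple way to couple the two outputs; the actual mechanism used in the paper may differ.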

 
Award ID(s):
1909028
NSF-PAR ID:
10202877
Author(s) / Creator(s):
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Computer Graphics Forum
Volume:
39
Issue:
7
ISSN:
0167-7055
Page Range / eLocation ID:
p. 459-470
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Precomputed Radiance Transfer (PRT) remains an attractive solution for real‐time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real‐time. However, practical PRT methods are usually limited to low‐frequency spherical harmonic lighting. All‐frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or only direct lighting with triple product integrals. In this paper, we demonstrate a hybrid neural‐wavelet PRT solution to high‐frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we seek to represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi‐layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters being other MLP inputs. We optimize/learn the feature field (compactly represented by a tensor decomposition) and MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real‐time (512 x 512 at 24 FPS, 800 x 600 at 13 FPS) precomputed rendering of challenging scenes involving view‐dependent reflections and even caustics.
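    As a rough illustration of the ingredients described above (a compact feature field plus a small MLP indexed by spatial position and wavelet index), the following Python sketch queries a tensor-factorized feature grid and predicts one Haar transport coefficient. The factorization rank, feature sizes, and input encodings are assumptions for illustration and do not reproduce the paper's actual model.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a tensor-decomposed 3D feature grid queried at a spatial
# location, concatenated with a wavelet-index embedding, reflected direction, and
# material parameters, then fed to a small MLP that predicts one Haar wavelet
# transport coefficient. All sizes are illustrative, not the paper's.
class WaveletPRT(nn.Module):
    def __init__(self, grid=64, feat=16, n_wavelets=4096):
        super().__init__()
        # simple per-axis factorization of a feat x grid^3 feature volume
        self.fx = nn.Parameter(torch.randn(feat, grid))
        self.fy = nn.Parameter(torch.randn(feat, grid))
        self.fz = nn.Parameter(torch.randn(feat, grid))
        self.wavelet_emb = nn.Embedding(n_wavelets, 16)
        self.mlp = nn.Sequential(
            nn.Linear(feat + 16 + 3 + 2, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),                  # one transport coefficient
        )

    def forward(self, xyz, wavelet_idx, refl_dir, material):
        # xyz in [0,1]^3 -> nearest sample along each factored axis
        ix = (xyz * (self.fx.shape[1] - 1)).long()
        feat = self.fx[:, ix[:, 0]] * self.fy[:, ix[:, 1]] * self.fz[:, ix[:, 2]]
        h = torch.cat([feat.t(), self.wavelet_emb(wavelet_idx), refl_dir, material], dim=-1)
        return self.mlp(h)

model = WaveletPRT()
coeff = model(torch.rand(8, 3), torch.randint(0, 4096, (8,)),
              torch.rand(8, 3), torch.rand(8, 2))   # example query batch
```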

     
  2. This paper presents an absolute phase unwrapping method for high-speed three-dimensional (3D) shape measurement. This method uses three phase-shifted patterns and one binary random pattern on a single-camera, single-projector structured light system. We calculate the wrapped phase from the phase-shifted images and determine a coarse correspondence through digital image correlation (DIC) between the captured binary random pattern of the object and a pre-captured binary random pattern of a flat surface. We then develop a computational framework to determine the fringe order pixel by pixel using this coarse correspondence information. Since only one additional pattern is used, the proposed method can be used for high-speed 3D shape measurement. Experimental results demonstrate that the proposed method achieves high-speed and high-quality measurement of complex scenes.
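    For reference, the wrapped phase in a three-step phase-shifting system follows a standard arctangent formula, and a coarse per-pixel correspondence can then fix the fringe order. The sketch below shows that pattern in NumPy; the DIC matching itself and all pattern parameters are simplified assumptions rather than the paper's pipeline.

```python
import numpy as np

# Sketch of standard three-step phase shifting plus fringe-order unwrapping
# from a coarse correspondence map (e.g. obtained by DIC against a reference
# random pattern). Parameters and the toy data are illustrative assumptions.
def wrapped_phase(i1, i2, i3):
    # phase shifts of -2*pi/3, 0, +2*pi/3 between the three captured patterns
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

def unwrap_with_correspondence(phi_wrapped, projector_x, fringe_pitch):
    """Absolute phase from wrapped phase and a coarse per-pixel projector coordinate."""
    phi_coarse = 2.0 * np.pi * projector_x / fringe_pitch     # rough absolute phase
    k = np.round((phi_coarse - phi_wrapped) / (2.0 * np.pi))  # fringe order per pixel
    return phi_wrapped + 2.0 * np.pi * k

# toy example: synthetic fringes over a 1D slice of pixels
x = np.linspace(0, 600, 600)                  # projector column seen by each pixel
pitch = 30.0
phase_true = 2 * np.pi * x / pitch
i1, i2, i3 = (1 + np.cos(phase_true + d) for d in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3))
phi = wrapped_phase(i1, i2, i3)
phi_abs = unwrap_with_correspondence(phi, x + np.random.normal(0, 2, x.shape), pitch)
```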

     
  3. Structured light illumination is an active three-dimensional scanning technique that uses a projector and camera pair to project and capture a series of stripe patterns; however, with a single camera and single projector, structured light scanning has issues associated with scan occlusions, multi-path interference, and weak signal reflections. To address these issues, this paper proposes dual-projector scanning using a range of projector/camera arrangements. Unlike previous attempts at dual-projector scanning, the proposed scanner drives both light engines simultaneously, using temporal-frequency multiplexing to computationally decouple the projected patterns. Besides presenting the details of how such a system is built, we also present experimental results demonstrating how multiple projectors can be used to (1) minimize occlusions; (2) achieve higher signal-to-noise ratios by providing twice a single projector's brightness; (3) reduce the number of component video frames required for a scan; and (4) detect multi-path interference.
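    The temporal-frequency multiplexing idea can be illustrated with a simple per-pixel lock-in demodulation: each projector modulates its pattern with a sinusoidal envelope at its own temporal frequency, and projecting the captured frame stack onto each carrier recovers the corresponding pattern. The frequencies, frame counts, and modulation model below are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

# Sketch of temporal-frequency multiplexing: two patterns flashed with
# sinusoidal envelopes at distinct temporal frequencies are separated by
# per-pixel lock-in demodulation of the captured video stack.
def demultiplex(frames, freqs, fps):
    """frames: (T, H, W) captured stack; returns one amplitude image per frequency."""
    t = np.arange(frames.shape[0]) / fps
    out = []
    for f in freqs:
        ref_c = np.cos(2 * np.pi * f * t)[:, None, None]
        ref_s = np.sin(2 * np.pi * f * t)[:, None, None]
        # lock-in: project the per-pixel time signal onto the reference carrier
        re = (frames * ref_c).mean(axis=0)
        im = (frames * ref_s).mean(axis=0)
        out.append(2.0 * np.hypot(re, im))
    return out

# toy example: two static patterns multiplexed at 6 Hz and 10 Hz, captured at 60 fps
T, H, W, fps = 120, 4, 4, 60.0
p1, p2 = np.random.rand(H, W), np.random.rand(H, W)
t = np.arange(T) / fps
frames = (p1 * (1 + np.cos(2 * np.pi * 6 * t))[:, None, None]
          + p2 * (1 + np.cos(2 * np.pi * 10 * t))[:, None, None])
a1, a2 = demultiplex(frames, [6.0, 10.0], fps)   # recovers p1 and p2 up to scale
```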

     
  4. Currently, many critical care indices are repetitively assessed and recorded by overburdened nurses, e.g. physical function or the facial pain expressions of nonverbal patients. In addition, much essential information on patients and their environment is not captured at all, or is captured only in a non-granular manner, e.g. sleep disturbance factors such as bright light, loud background noise, or excessive visitations. In this pilot study, we examined the feasibility of using pervasive sensing technology and artificial intelligence for autonomous and granular monitoring of critically ill patients and their environment in the Intensive Care Unit (ICU). As an exemplar prevalent condition, we also characterized delirious and non-delirious patients and their environments. We used wearable sensors, light and sound sensors, and a high-resolution camera to collect data on patients and their environment, and we analyzed the collected data using deep learning and statistical analysis. Our system performed face detection, face recognition, facial action unit detection, head pose detection, facial expression recognition, posture recognition, actigraphy analysis, sound pressure and light level detection, and visitation frequency detection. We were able to detect patients' faces (mean average precision (mAP) = 0.94), recognize patients' faces (mAP = 0.80), and recognize their postures (F1 = 0.94). We also found that all facial expressions, 11 activity features, visitation frequency during the day, visitation frequency during the night, light levels, and sound pressure levels during the night were significantly different between delirious and non-delirious patients (p < 0.05). In summary, we showed that granular and autonomous monitoring of critically ill patients and their environment is feasible and can be used to characterize critical care conditions and related environmental factors.
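    The abstract reports that several features differed significantly (p < 0.05) between delirious and non-delirious patients but does not name the statistical test. The sketch below illustrates one common choice for such a two-group comparison, a Mann-Whitney U test applied per feature; the feature names and data are synthetic placeholders, not the study's.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical sketch of the group-comparison step: for each extracted feature
# (e.g. nighttime sound pressure, visitation frequency), compare delirious vs.
# non-delirious patients. The test choice and data here are assumptions.
rng = np.random.default_rng(0)
features = {
    "night_sound_db": (rng.normal(55, 5, 20), rng.normal(48, 5, 25)),   # synthetic
    "night_visits":   (rng.poisson(6, 20),    rng.poisson(3, 25)),      # synthetic
}
for name, (delirious, non_delirious) in features.items():
    stat, p = mannwhitneyu(delirious, non_delirious, alternative="two-sided")
    print(f"{name}: U={stat:.1f}, p={p:.4f}, significant={p < 0.05}")
```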
  5. Abstract

    Biogeographical classifications of the global ocean generalize spatiotemporal trends in species or biomass distributions across discrete ocean biomes or provinces. These classifications are generally based on a combination of remote-sensed proxies of phytoplankton biomass and global climatologies of biogeochemical or physical parameters. However, these approaches are limited in their capacity to account for subsurface variability in these parameters. The deployment of autonomous profiling floats in the Biogeochemical Argo network over the last decade has greatly increased global coverage of subsurface measurements of bio-optical proxies for phytoplankton biomass and physiology. In this study, we used empirical orthogonal function analysis to identify the main components of variability in a global data set of 422 annual time series of Chlorophyll a fluorescence and optical backscatter profiles. Applying cluster analysis to these results, we identified six biomes within the global ocean: two high-latitude biomes capturing summer bloom dynamics in the North Atlantic and Southern Ocean and four mid- and low-latitude biomes characterized by variability in the depth and frequency of deep chlorophyll maximum formation. We report the distribution of these biomes along with associated trends in biogeochemical and physicochemical environmental parameters. Our results show that light and nutrients explain most of the variability in phytoplankton distributions across all biomes, while highlighting a global inverse relationship between particle stocks in the euphotic zone and transfer efficiency into the mesopelagic zone. In addition to partitioning seasonal variability in vertical phytoplankton distributions at the global scale, our results provide a potentially novel biogeographical classification of the global ocean.
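    The analysis pattern described above, empirical orthogonal function (EOF) analysis followed by clustering of the leading modes, can be sketched in Python as below. PCA stands in for the EOF computation, and the input shapes, preprocessing, and numbers of modes and clusters are assumptions rather than the study's exact choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Illustrative sketch: EOF analysis (via PCA) of float time series, then
# clustering of the leading mode amplitudes into biomes. Shapes and counts
# are placeholders; the data here are random, not the study's observations.
n_series, n_depths, n_months = 422, 50, 12
profiles = np.random.rand(n_series, n_depths * n_months)   # synthetic fluorescence data

anom = profiles - profiles.mean(axis=0)        # anomalies relative to the mean profile
pca = PCA(n_components=4)                      # leading EOF modes
scores = pca.fit_transform(anom)               # per-time-series mode amplitudes
biome = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
print("explained variance:", pca.explained_variance_ratio_.round(2))
print("series per biome:", np.bincount(biome))
```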

     