

Title: PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting
We present PhySG, an end-to-end inverse rendering pipeline that includes a fully differentiable renderer and can reconstruct geometry, materials, and illumination from scratch from a set of images. Our framework represents specular BRDFs and environmental illumination using mixtures of spherical Gaussians, and represents geometry as a signed distance function parameterized as a Multi-Layer Perceptron. The use of spherical Gaussians allows us to efficiently solve for approximate light transport, and our method works on scenes with challenging non-Lambertian reflectance captured under natural, static illumination. We demonstrate, with both synthetic and real data, that our reconstructions not only enable rendering of novel viewpoints, but also physics-based appearance editing of materials and illumination.
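As a minimal sketch of the representation the abstract describes, a spherical Gaussian (SG) lobe is commonly written G(v) = mu * exp(lambda * (dot(v, xi) - 1)) for a unit direction v, lobe axis xi, sharpness lambda, and amplitude mu; environment illumination is then a sum of such lobes. The function names and mixture layout below are illustrative, not PhySG's actual API:

```python
import numpy as np

def spherical_gaussian(v, lobe_axis, sharpness, amplitude):
    """Evaluate one isotropic SG lobe at unit direction v.

    G(v) = amplitude * exp(sharpness * (dot(v, lobe_axis) - 1)),
    which peaks at v == lobe_axis with value `amplitude`.
    """
    return amplitude * np.exp(sharpness * (np.dot(v, lobe_axis) - 1.0))

def sg_mixture(v, lobes):
    """Sum of SG lobes, e.g. an environment map approximated by a mixture.

    `lobes` is a list of (lobe_axis, sharpness, amplitude) triples.
    """
    return sum(spherical_gaussian(v, xi, lam, mu) for xi, lam, mu in lobes)
```

Products and integrals of SGs have closed-form or cheap approximations, which is what makes the light-transport solve in the paper efficient.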
Award ID(s):
1900783
NSF-PAR ID:
10286156
Journal Name:
IEEE Conference on Computer Vision and Pattern Recognition
ISSN:
2163-6648
Page Range / eLocation ID:
5453-5462
Sponsoring Org:
National Science Foundation
More Like this
  1. Inverse rendering is a powerful approach to modeling objects from photographs, and we extend previous techniques to handle translucent materials that exhibit subsurface scattering. Representing translucency using a heterogeneous bidirectional scattering-surface reflectance distribution function (BSSRDF), we extend the framework of path-space differentiable rendering to accommodate both surface and subsurface reflection. This introduces new types of paths requiring new methods for sampling moving discontinuities in material space that arise from visibility and moving geometry. We use this differentiable rendering method in an end-to-end approach that jointly recovers heterogeneous translucent materials (represented by a BSSRDF) and detailed geometry of an object (represented by a mesh) from a sparse set of measured 2D images in a coarse-to-fine framework incorporating Laplacian preconditioning for the geometry. To efficiently optimize our models in the presence of the Monte Carlo noise introduced by the BSSRDF integral, we introduce a dual-buffer method for evaluating the L2 image loss. This efficiently avoids potential bias in gradient estimation due to the correlation of estimates for image pixels and their derivatives and enables correct convergence of the optimizer even when using low sample counts in the renderer. We validate our derivatives by comparing against finite differences and demonstrate the effectiveness of our technique by comparing inverse-rendering performance with previous methods. We show superior reconstruction quality on a set of synthetic and real-world translucent objects as compared to previous methods that model only surface reflection. 
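The dual-buffer trick mentioned above rests on a simple identity: for a single Monte Carlo estimate X of a pixel, E[(X - T)^2] = (E[X] - T)^2 + Var[X], so the naive L2 loss is biased upward by the estimator variance; but for two independent estimates X1, X2 of the same pixel, E[(X1 - T)(X2 - T)] = (E[X] - T)^2 exactly. A minimal numerical sketch (function names are illustrative):

```python
import numpy as np

def naive_l2(buf, target):
    # Biased: E[(X - T)^2] = (E[X] - T)^2 + Var[X],
    # so Monte Carlo noise inflates the loss (and corrupts gradients).
    return np.mean((buf - target) ** 2)

def dual_buffer_l2(buf_a, buf_b, target):
    # Unbiased when buf_a and buf_b are independent renderings of the
    # same image: E[(X1 - T)(X2 - T)] = (E[X] - T)^2.
    return np.mean((buf_a - target) * (buf_b - target))
```

With a converged scene (E[X] = T), the naive loss plateaus at the per-pixel variance while the dual-buffer loss correctly approaches zero, which is why low sample counts remain usable during optimization.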
  2. Abstract

    Precomputed Radiance Transfer (PRT) remains an attractive solution for real‐time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real‐time. However, practical PRT methods are usually limited to low‐frequency spherical harmonic lighting. All‐frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or only direct lighting with triple product integrals. In this paper, we demonstrate a hybrid neural‐wavelet PRT solution to high‐frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we seek to represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi‐layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters being other MLP inputs. We optimize/learn the feature field (compactly represented by a tensor decomposition) and MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real‐time (512 x 512 at 24 FPS, 800 x 600 at 13 FPS) precomputed rendering of challenging scenes involving view‐dependent reflections and even caustics.
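For reference, the Haar basis into which the abstract projects the light transport function is the simplest orthonormal wavelet: each level splits a signal into pairwise averages and differences, scaled by 1/sqrt(2). A minimal 1D sketch (power-of-two length assumed; this illustrates the basis only, not the paper's MLP-based transport model):

```python
import numpy as np

def haar_1d(signal):
    """Full orthonormal Haar decomposition of a 1D signal.

    Length must be a power of two. Returns coefficients ordered
    [overall average, coarsest detail, ..., finest details], so a
    smooth signal concentrates energy in the first few entries.
    """
    x = np.asarray(signal, dtype=float)
    out = []
    while len(x) > 1:
        avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass half
        diff = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass (detail) half
        out.append(diff)
        x = avg
    out.append(x)  # final single-sample average
    return np.concatenate(out[::-1])
```

Because the transform is orthonormal it preserves energy, and high-frequency lighting needs only the (sparse) fine-detail coefficients — the property that all-frequency relighting exploits.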

     
  3. We introduce Doppler time-of-flight (D-ToF) rendering, an extension of ToF rendering for dynamic scenes, with applications in simulating D-ToF cameras. D-ToF cameras use high-frequency modulation of illumination and exposure, and measure the Doppler frequency shift to compute the radial velocity of dynamic objects. The time-varying scene geometry and high-frequency modulation functions used in such cameras make it challenging to accurately and efficiently simulate their measurements with existing ToF rendering algorithms. We overcome these challenges in a twofold manner: To achieve accuracy, we derive path integral expressions for D-ToF measurements under global illumination and form unbiased Monte Carlo estimates of these integrals. To achieve efficiency, we develop a tailored time-path sampling technique that combines antithetic time sampling with correlated path sampling. We show experimentally that our sampling technique achieves up to two orders of magnitude lower variance compared to naive time-path sampling. We provide an open-source simulator that serves as a digital twin for D-ToF imaging systems, allowing imaging researchers, for the first time, to investigate the impact of modulation functions, material properties, and global illumination on D-ToF imaging performance.
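The camera-side relationship the abstract relies on is the round-trip Doppler shift of the illumination's modulation frequency: for radial velocity v_r much smaller than c, the shift is approximately delta_f = 2 * v_r * f_mod / c, which the camera inverts to recover v_r. A minimal sketch of that conversion (function names are illustrative; the paper's contribution is the rendering/sampling machinery, not this formula):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def doppler_shift(f_mod, radial_velocity):
    """Round-trip Doppler shift of modulation frequency f_mod (Hz)
    for a target moving at radial_velocity (m/s), valid for v << c."""
    return 2.0 * radial_velocity * f_mod / C

def radial_velocity_from_shift(f_mod, delta_f):
    """Invert the shift to recover radial velocity (m/s)."""
    return delta_f * C / (2.0 * f_mod)
```

At a typical 100 MHz modulation, a 10 m/s target produces only a few hertz of shift, which is why accurate simulation of the high-frequency modulation and exposure functions matters.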

     
  4. Reconstructing 3D objects in natural environments requires solving the ill-posed problem of geometry, spatially-varying material, and lighting estimation. As such, many approaches impractically constrain to a dark environment, use controlled lighting rigs, or use few handheld captures but suffer reduced quality. We develop a method that uses just two smartphone exposures captured in ambient lighting to reconstruct appearance more accurately and practically than baseline methods. Our insight is that we can use a flash/no-flash RGB-D pair to pose an inverse rendering problem using point lighting. This allows efficient differentiable rendering to optimize depth and normals from a good initialization, and in turn to simultaneously optimize diffuse environment illumination and SVBRDF material. We find that this reduces diffuse albedo error by 25%, specular error by 46%, and normal error by 30% against single and paired-image baselines that use learning-based techniques. Given that our approach is practical for everyday solid objects, we enable photorealistic relighting for mobile photography and easier content creation for augmented reality.
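The flash/no-flash insight rests on linearity of light transport: in linear (RAW) intensities, the flash image is the sum of the ambient image and the contribution of the known flash point light, so subtracting the no-flash exposure isolates a controlled-point-light image. A minimal sketch of that separation (function name is illustrative, and exposures are assumed to be aligned and on the same linear scale):

```python
import numpy as np

def flash_only_image(flash_img, noflash_img):
    """Isolate the point-light (flash) contribution from an aligned
    flash/no-flash pair in linear intensities:
        flash = ambient + flash_contribution  =>  subtract ambient.
    Negative values from noise are clipped to zero."""
    diff = flash_img.astype(float) - noflash_img.astype(float)
    return np.clip(diff, 0.0, None)
```

The resulting image behaves as if lit by a single known point source, which is what makes the subsequent differentiable-rendering optimization well posed.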
  5. This research investigates the design of structurally performant, lightweight architectural elements produced through concrete 3D printing (C3DP). Traditionally, concrete requires dense and sturdy formwork, whose production adds significantly to the total cost and results in massive and heavy parts after demolding. C3DP offers the unique opportunity both to eliminate the need for formwork and to create lighter parts by introducing internal voids and cavities. The advent of additive manufacturing in a broad range of scales, materials, industries, and applications has led to increased interest and intense research into different types of porous structures, their geometry, and their structural performance under various boundary conditions. Precise control over the sparse distribution of material allows not only for parts with similar strength at reduced mass but even for modifications of mechanical properties, like turning brittle materials into elastic or shock-absorbent ones. While with powder-based additive manufacturing processes like metal 3D printing, truss-based lattices have become very popular for the light-weighting of parts or to provide tissue growth scaffolds for medical implants, their geometry – a sparse space frame resulting in numerous individual contour islands and accentuated overhangs – cannot as easily be produced by C3DP, which is based on a continuous material extrusion. Alternative types of micro-structures, so-called triply periodic minimal surfaces (TPMS), are better suited for this process as they consist, as their name suggests, of one continuous surface dividing space into two separate but interwoven subspaces. TPMS are therefore also very popular for the efficient design of heat exchangers. We develop and present a continuous and integrated workflow, in which the architectural elements and their structural requirements are designed through transitioning back and forth between the force and the form diagram using 3D graphic statics [1].
The members and their topology from the abstract graph of the conceptual form diagram are seamlessly connected to the volumetric modeling (VM) framework, responsible for the definition of the part geometry [2]. VM represents form as signed distance functions (SDFs) and can easily handle complex topologies and flawless Boolean operations of not only the outer shell geometry but also the internal micro-structural infill patterns (Fig. 1, a). In an iterative feedback loop, the infill can be further optimized to leave material only along certain internal stress trajectories (force flows). This functional grading, which controls the relative density, is based on the FE analysis results. The stress distribution is thereby defined as a three-dimensional field (Fig. 1, b). Its values can factor into the SDF equation and be used to modify the wavelength (periodicity) of the TPMS, the local thickness of the surface shell, the solid-to-void fraction by shifting the threshold iso-value, or even the alignment and orientation of the unit cells (Fig. 1, c). They can be arranged in an orthogonal, polar, or even spherical coordinate system to optimally adapt to structural necessities. The TPMS pattern can also gradually transition from one type into another along the gradient of a spatial function.
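A concrete example of the SDF-style machinery described above is the gyroid, a standard TPMS given by the implicit equation sin(X)cos(Y) + sin(Y)cos(Z) + sin(Z)cos(X) = iso. Scaling the coordinates changes the wavelength (periodicity), and shifting the iso-value changes the solid-to-void fraction, exactly the two grading handles the abstract mentions. This sketch is illustrative and not the authors' VM framework:

```python
import numpy as np

def gyroid(x, y, z, wavelength=1.0, iso=0.0):
    """Implicit gyroid field f(p) = sin X cos Y + sin Y cos Z + sin Z cos X - iso.

    f(p) <= 0 selects one of the two interwoven subspaces (the "solid").
    `wavelength` scales the unit cell (periodicity); raising `iso`
    shifts the threshold and grows the selected subspace (solid fraction).
    """
    k = 2.0 * np.pi / wavelength
    X, Y, Z = k * np.asarray(x), k * np.asarray(y), k * np.asarray(z)
    return (np.sin(X) * np.cos(Y)
            + np.sin(Y) * np.cos(Z)
            + np.sin(Z) * np.cos(X) - iso)
```

Making `wavelength` and `iso` spatially varying fields driven by FE stress values, rather than constants, yields the functionally graded infill the text describes.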