Abstract: Monte Carlo rendering of translucent objects with heterogeneous scattering properties is often expensive both in terms of memory and computation. If the scattering properties are described by a 3D texture, memory consumption is high. If we do path tracing and use a high dynamic range lighting environment, the computational cost of the rendering can easily become significant. We propose a compact and efficient neural method for representing and rendering the appearance of heterogeneous translucent objects. Instead of assuming only surface variation of optical properties, our method represents the appearance of a full object, taking its geometry and volumetric heterogeneities into account. This is similar to a neural radiance field, but our representation works for an arbitrary distant lighting environment. In a sense, we present a version of neural precomputed radiance transfer that captures relighting of heterogeneous translucent objects. We use a multi-layer perceptron (MLP) with skip connections to represent the appearance of an object as a function of spatial position, direction of observation, and direction of incidence. The latter is considered a directional light incident across the entire non-self-shadowed part of the object. We demonstrate the ability of our method to compactly store highly complex materials with high accuracy when compared with reference images of the represented object in unseen lighting environments. Compared with path tracing of a heterogeneous light scattering volume behind a refractive interface, our method more easily enables importance sampling of the directions of incidence and can be integrated into existing rendering frameworks while achieving interactive frame rates.
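The abstract describes an MLP with skip connections that maps position, view direction, and incidence direction to radiance. The following is only a minimal sketch of that kind of network; the layer widths, depth, activation, and lack of input encoding are assumptions for illustration, not the authors' architecture.

```python
# Sketch (assumed architecture): an MLP with a skip connection mapping
# (position, view direction, incidence direction) to RGB radiance.
import torch
import torch.nn as nn

class TranslucentAppearanceMLP(nn.Module):
    def __init__(self, in_dim=9, hidden=256, depth=8, skip_at=4):
        super().__init__()
        self.skip_at = skip_at
        layers = []
        for i in range(depth):
            d_in = in_dim if i == 0 else hidden
            if i == skip_at:                 # re-inject the raw input here
                d_in = hidden + in_dim
            layers.append(nn.Linear(d_in, hidden))
        self.layers = nn.ModuleList(layers)
        self.out = nn.Linear(hidden, 3)      # RGB radiance

    def forward(self, x, wo, wi):
        # x: spatial position, wo: direction of observation, wi: direction of incidence
        inp = torch.cat([x, wo, wi], dim=-1)
        h = inp
        for i, layer in enumerate(self.layers):
            if i == self.skip_at:
                h = torch.cat([h, inp], dim=-1)   # skip connection
            h = torch.relu(layer(h))
        return self.out(h)

net = TranslucentAppearanceMLP()
rgb = net(torch.rand(4, 3), torch.rand(4, 3), torch.rand(4, 3))  # (4, 3) radiance values
```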
Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting
Efficiently rendering direct lighting from millions of dynamic light sources using Monte Carlo integration remains a challenging problem, even for off-line rendering systems. We introduce a new algorithm—ReSTIR—that renders such lighting interactively, at high quality, and without needing to maintain complex data structures. We repeatedly resample a set of candidate light samples and apply further spatial and temporal resampling to leverage information from relevant nearby samples. We derive an unbiased Monte Carlo estimator for this approach, and show that it achieves equal-error 6×-60× faster than state-of-the-art methods. A biased estimator reduces noise further and is 35×-65× faster, at the cost of some energy loss. We implemented our approach on the GPU, rendering complex scenes containing up to 3.4 million dynamic, emissive triangles in under 50 ms per frame while tracing at most 8 rays per pixel.
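As a rough illustration of the core idea, streaming resampled importance sampling with a reservoir, the sketch below picks one light sample out of M candidates in O(1) memory. The target_pdf and candidate generation are placeholders; the full algorithm additionally reuses reservoirs spatially and temporally and handles visibility, all omitted here.

```python
# Sketch of weighted reservoir sampling / RIS for one pixel (simplified;
# the paper's spatial/temporal reuse and visibility handling are omitted).
import random

class Reservoir:
    def __init__(self):
        self.sample = None   # currently selected light sample
        self.w_sum = 0.0     # running sum of resampling weights
        self.M = 0           # number of candidates seen

    def update(self, candidate, weight):
        self.w_sum += weight
        self.M += 1
        if self.w_sum > 0 and random.random() < weight / self.w_sum:
            self.sample = candidate

def target_pdf(light_sample):
    # Placeholder: ideally proportional to the unshadowed contribution (BRDF * Le * G).
    return max(light_sample, 1e-6)

def ris_pick_light(num_candidates=32):
    r = Reservoir()
    for _ in range(num_candidates):
        cand = random.random()     # placeholder light sample
        source_p = 1.0             # pdf the candidate was drawn with
        r.update(cand, target_pdf(cand) / source_p)
    # Unbiased contribution weight W for the chosen sample:
    W = r.w_sum / (r.M * target_pdf(r.sample)) if r.sample is not None else 0.0
    return r.sample, W

sample, W = ris_pick_light()
# Estimator: f(sample) * W approximates the sum/integral over the lights.
```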
- Award ID(s): 1844538
- PAR ID: 10172299
- Date Published:
- Journal Name: ACM Transactions on Graphics
- Volume: 39
- Issue: 4
- ISSN: 0730-0301
- Page Range / eLocation ID: 148
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract: Recently, deep learning-based denoising approaches have led to dramatic improvements in low sample-count Monte Carlo rendering. These approaches are aimed at path tracing, which is not ideal for simulating challenging light transport effects like caustics, where photon mapping is the method of choice. However, photon mapping requires very large numbers of traced photons to achieve high-quality reconstructions. In this paper, we develop the first deep learning-based method for particle-based rendering, and specifically focus on photon density estimation, the core of all particle-based methods. We train a novel deep neural network to predict a kernel function to aggregate photon contributions at shading points. Our network encodes individual photons into per-photon features, aggregates them in the neighborhood of a shading point to construct a photon local context vector, and infers a kernel function from the per-photon and photon local context features. This network is easy to incorporate in many previous photon mapping methods (by simply swapping the kernel density estimator) and can produce high-quality reconstructions of complex global illumination effects like caustics with an order of magnitude fewer photons compared to previous photon mapping methods. Our approach largely reduces the required number of photons, significantly advancing the computational efficiency in photon mapping.
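For context, a toy photon density estimate at a shading point is sketched below; the fixed Epanechnikov kernel is the slot where the paper's learned, per-photon kernel would be swapped in. The photon data and gather radius are illustrative only.

```python
# Toy surface photon density estimate with a disc-normalized Epanechnikov kernel.
# A learned kernel (as in the paper) would replace the analytic kernel k below.
import math

def estimate_radiance(shading_pt, photons, radius):
    # photons: list of (position, flux) pairs; scalar flux for brevity
    total = 0.0
    for pos, flux in photons:
        d = math.dist(shading_pt, pos)
        if d < radius:
            k = (2.0 / (math.pi * radius**2)) * (1.0 - (d / radius) ** 2)  # Epanechnikov
            total += k * flux
    return total

photons = [((0.1, 0.0, 0.0), 0.5), ((0.0, 0.2, 0.0), 0.3), ((1.0, 1.0, 1.0), 0.8)]
print(estimate_radiance((0.0, 0.0, 0.0), photons, radius=0.5))
```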
In recent years, reservoir-based spatiotemporal importance resampling (ReSTIR) algorithms appeared out of nowhere to take parts of the real-time rendering community by storm, with sample reuse speeding direct lighting from millions of dynamic lights [1], diffuse multi-bounce lighting [2], participating media [3], and even complex global illumination paths [4]. Highly optimized variants (e.g. [5]) can give 100× efficiency improvement over traditional ray- and path-tracing methods; this is key to achieving 30 or 60 Hz frame rates. In production engines, tracing even one ray or path per pixel may only be feasible on the highest-end systems, so maximizing image quality per sample is vital. ReSTIR builds on the math in Talbot et al.'s [6] resampled importance sampling (RIS), which previously was not widely used or taught, leaving many practitioners missing key intuitions and theoretical grounding. A firm grounding is vital, as seemingly obvious "optimizations" arising during ReSTIR engine integration can silently introduce conditional probabilities and dependencies that, left ignored, add uncontrollable bias to the results. In this course, we plan to:
1. Provide concrete motivation and intuition for why ReSTIR works, where it applies, what assumptions it makes, and the limitations of today's theory and implementations;
2. Gently develop the theory, targeting attendees with basic Monte Carlo sampling experience but without prior knowledge of resampling algorithms (e.g., Talbot et al. [6]);
3. Give explicit algorithmic samples and pseudocode, pointing out easily-encountered pitfalls when implementing ReSTIR;
4. Discuss actual game integrations, highlighting the gotchas, challenges, and corner cases we encountered along the way, and highlighting ReSTIR's practical benefits.
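Since the course grounds ReSTIR in Talbot et al.'s RIS, a compact batch form of that estimator may help fix the intuition. The integrand and densities below are toy placeholders, not part of the course material.

```python
# Toy batch RIS estimator: draw M candidates from an easy source pdf,
# reweight toward an unnormalized target p_hat, pick one proportionally,
# and correct with the average weight. Integrand and pdfs are illustrative.
import random

def f(x):          # integrand; we estimate its integral over [0, 1]
    return x * x

def p_hat(x):      # unnormalized target density, ideally ~ f
    return x * x + 1e-6

def ris_estimate(M=16):
    xs = [random.random() for _ in range(M)]   # source pdf p(x) = 1 on [0, 1]
    ws = [p_hat(x) / 1.0 for x in xs]          # resampling weights p_hat / p
    total = sum(ws)
    u, acc, y = random.random() * total, 0.0, xs[-1]
    for x, w in zip(xs, ws):                   # pick index proportional to w
        acc += w
        if u <= acc:
            y = x
            break
    return (f(y) / p_hat(y)) * (total / M)     # unbiased RIS estimate

est = sum(ris_estimate() for _ in range(10000)) / 10000
print(est)  # should approach 1/3, the integral of x^2 on [0, 1]
```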
Abstract: To increase diversity and realism, surface bidirectional scattering distribution functions (BSDFs) are often modelled as consisting of multiple layers, but accurately evaluating layered BSDFs while accounting for all light transport paths is a challenging problem. Recently, Guo et al. [GHZ18] proposed an accurate and general position-free Monte Carlo method, but this method introduces variance that leads to longer render time compared to non-stochastic layered models. We improve on the previous work by presenting two new sampling strategies, pair-product sampling and multiple-product sampling. Our new methods better take advantage of the layered structure and reduce variance compared to the conventional approach of sequentially sampling one BSDF at a time. Our pair-product sampling strategy importance samples the product of two BSDFs from a pair of adjacent layers. We further generalize this to multiple-product sampling, which importance samples the product of a chain of three or more BSDFs. In order to compute these products, we developed a new approximate Gaussian representation of individual layer BSDFs. This representation incorporates spatially varying material properties as parameters so that our techniques can support an arbitrary number of textured layers. Compared to previous Monte Carlo layering approaches, our results demonstrate substantial variance reduction in rendering isotropic layered surfaces.
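The paper's Gaussian lobes live on directions and carry textured parameters; as a loose one-dimensional illustration of why a Gaussian representation makes pair-product sampling convenient, the product of two Gaussians is itself Gaussian and can be sampled directly. This is only for intuition, not the paper's actual representation.

```python
# 1D illustration of the pair-product idea: if each layer's BSDF slice is
# approximated by a Gaussian, the product of two layers is again Gaussian
# and trivial to importance sample. Parameters below are hypothetical.
import math
import random

def product_of_gaussians(mu1, sigma1, mu2, sigma2):
    # N(mu1, s1^2) * N(mu2, s2^2) is proportional to N(mu, s^2) with:
    inv_var = 1.0 / sigma1**2 + 1.0 / sigma2**2
    var = 1.0 / inv_var
    mu = var * (mu1 / sigma1**2 + mu2 / sigma2**2)
    return mu, math.sqrt(var)

# Two hypothetical adjacent-layer lobes (e.g. a rough coat over a base):
mu, sigma = product_of_gaussians(mu1=0.1, sigma1=0.3, mu2=-0.2, sigma2=0.2)
sample = random.gauss(mu, sigma)   # importance sample the pair product
print(mu, sigma, sample)
```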
This 3-hour course provides a detailed overview of grid-free Monte Carlo methods for solving partial differential equations (PDEs) based on the walk on spheres (WoS) algorithm, with a special emphasis on problems with high geometric complexity. PDEs are a basic building block of models and algorithms used throughout science, engineering, and visual computing. Yet despite decades of research, conventional PDE solvers struggle to capture the immense geometric complexity of the natural world. A perennial challenge is spatial discretization: traditionally, one must partition the domain into a high-quality volumetric mesh—a process that can be brittle, memory intensive, and difficult to parallelize. WoS makes a radical departure from this approach by reformulating the problem in terms of recursive integral equations that can be estimated using the Monte Carlo method, eliminating the need for spatial discretization. Since these equations strongly resemble those found in light transport theory, one can leverage deep knowledge from Monte Carlo rendering to develop new PDE solvers that share many of its advantages: no meshing, trivial parallelism, and the ability to evaluate the solution at any point without solving a global system of equations. The course is divided into two parts. Part I will cover the basics of using WoS to solve fundamental PDEs like the Poisson equation. Topics include formulating the solution as an integral equation, generating samples via recursive random walks, and employing accelerated distance and ray intersection queries to efficiently handle complex geometries. Participants will also gain experience setting up demo applications involving data interpolation, heat transfer, and geometric optimization using the open-source "Zombie" library, which implements various grid-free Monte Carlo PDE solvers. Part II will feature a mini-panel of academic and industry contributors covering advanced topics including variance reduction, differentiable and multi-physics simulation, and applications in industrial design and robust geometry processing.
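For readers new to WoS, a minimal sketch of the classic estimator for the Laplace equation follows: from the query point, repeatedly jump to a uniformly random point on the largest empty sphere (here, a circle) until within epsilon of the boundary, then average the boundary values. The domain, boundary data, and parameters are toy choices, not taken from the course.

```python
# Minimal walk-on-spheres sketch for the Laplace equation (Δu = 0) on the
# unit disk with Dirichlet boundary data g. Toy setup for illustration only.
import math
import random

def dist_to_boundary(x, y):
    return 1.0 - math.hypot(x, y)            # distance to the unit-circle boundary

def g(x, y):
    return x * x - y * y                     # harmonic boundary data (exact solution known)

def wos_estimate(x, y, eps=1e-3, walks=2000):
    total = 0.0
    for _ in range(walks):
        px, py = x, y
        while (r := dist_to_boundary(px, py)) > eps:
            theta = random.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(theta)         # jump to the largest empty circle
            py += r * math.sin(theta)
        total += g(px, py)                    # record the (near-)boundary value
    return total / walks

print(wos_estimate(0.3, 0.2), "vs exact", 0.3**2 - 0.2**2)
```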