Title: Temporally Stable Metropolis Light Transport Denoising using Recurrent Transformer Blocks
Metropolis Light Transport (MLT) is a global illumination algorithm well known for rendering challenging scenes with intricate light paths. However, MLT methods tend to produce unpredictable correlation artifacts in images, which can introduce visual inconsistencies in animation rendering. This drawback also makes it challenging to denoise MLT renderings while maintaining temporal stability. We tackle this issue with modern learning-based methods and build a sequence denoiser that combines recurrent connections with a vision transformer architecture. We demonstrate that our denoiser consistently improves the quality and temporal stability of MLT renderings with difficult light paths. Our method is efficient and scalable for complex scene renderings that require high sample counts.
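The abstract does not spell out the architecture, but the combination it names — recurrent connections feeding a transformer block that denoises one frame at a time — can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch, not the paper's network: all layer choices, sizes, and the residual output are assumptions.

```python
# Minimal sketch (not the paper's architecture): a per-frame denoiser that
# fuses the current noisy frame's features with a recurrent hidden state,
# processes them with a standard transformer encoder block, and carries the
# updated state to the next frame. All layer sizes are illustrative.
import torch
import torch.nn as nn

class RecurrentTransformerDenoiser(nn.Module):
    def __init__(self, in_ch=3, dim=64, patch=8, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.fuse = nn.Linear(2 * dim, dim)            # mix current tokens with recurrent state
        self.block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                dim_feedforward=4 * dim,
                                                batch_first=True)
        self.to_image = nn.ConvTranspose2d(dim, in_ch, kernel_size=patch, stride=patch)

    def forward(self, frame, state=None):
        # frame: (B, C, H, W) noisy MLT render; state: (B, N, dim) tokens from the previous frame
        tokens = self.embed(frame)                     # (B, dim, H/p, W/p)
        b, d, h, w = tokens.shape
        tokens = tokens.flatten(2).transpose(1, 2)     # (B, N, dim)
        if state is None:
            state = torch.zeros_like(tokens)
        x = self.fuse(torch.cat([tokens, state], dim=-1))
        x = self.block(x)                              # spatial self-attention within the frame
        out = self.to_image(x.transpose(1, 2).reshape(b, d, h, w))
        return frame + out, x                          # residual prediction + new recurrent state

# Usage: iterate over an animation, threading the recurrent state through frames.
# denoiser = RecurrentTransformerDenoiser()
# state = None
# for frame in noisy_sequence:            # each frame: (1, 3, H, W)
#     clean, state = denoiser(frame, state)
```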
Award ID(s):
2105806
PAR ID:
10605333
Author(s) / Creator(s):
 ;  ;  
Publisher / Repository:
Association for Computing Machinery (ACM)
Date Published:
Journal Name:
ACM Transactions on Graphics
Volume:
43
Issue:
4
ISSN:
0730-0301
Format(s):
Medium: X
Size(s):
p. 1-14
Sponsoring Org:
National Science Foundation
More Like this
  1. As rendering engines become increasingly important in film and television through their use in virtual production (VP), some of their underlying issues become more apparent. This paper investigates how to improve the color matching of VP assets with the real-life objects found on set. Experiments were conducted in which objects were exposed to various lighting setups, and digital twins were rendered using both RGB and spectral methods, with data reduction techniques also employed. The renderings were then filmed alongside their real-life counterparts, and color difference metrics were used to determine whether spectral rendering and data reduction offered advantages over RGB rendering. The results show that spectral rendering does offer advantages, including higher accuracy in reproducing the colours of materials.
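As a concrete illustration of the color-difference comparison described above (not the paper's exact pipeline), the sketch below integrates a spectral reflectance under an illuminant against the CIE color-matching functions, converts the result to CIELAB, and computes a ΔE*ab distance. The spectral arrays and white point are placeholders to be supplied by the reader.

```python
# Sketch of the kind of comparison described above: integrate a spectral
# reflectance under an illuminant against the CIE color-matching functions
# to get XYZ, convert to CIELAB, and compute a color difference (Delta E*ab).
# The wavelength-sampled arrays are placeholders, not data from the paper.
import numpy as np

def spectral_to_xyz(reflectance, illuminant, cmf_xyz, dl=1.0):
    # reflectance, illuminant: (N,); cmf_xyz: (N, 3), all sampled at the same wavelengths
    k = 100.0 / np.sum(illuminant * cmf_xyz[:, 1] * dl)      # normalize so a perfect white has Y = 100
    return k * np.sum((reflectance * illuminant)[:, None] * cmf_xyz * dl, axis=0)

def xyz_to_lab(xyz, white):
    f = lambda t: np.where(t > (6/29)**3, np.cbrt(t), t / (3*(6/29)**2) + 4/29)
    x, y, z = f(xyz / white)
    return np.array([116*y - 16, 500*(x - y), 200*(y - z)])

def delta_e76(lab1, lab2):
    # Euclidean distance in CIELAB (Delta E*ab, 1976)
    return float(np.linalg.norm(lab1 - lab2))

# real_lab = xyz_to_lab(spectral_to_xyz(measured_reflectance, illum, cmf), white_xyz)
# cg_lab   = xyz_to_lab(render_xyz, white_xyz)   # from the spectral or RGB rendering
# print(delta_e76(real_lab, cg_lab))
```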
  2. Physics-based differentiable rendering is becoming increasingly crucial for tasks in inverse rendering and machine learning pipelines. To address discontinuities caused by geometric boundaries and occlusion, two classes of methods have been proposed: 1) edge-sampling methods, which directly sample light paths at the scene discontinuity boundaries but require nontrivial data structures and precomputation to select the edges, and 2) reparameterization methods, which avoid discontinuity sampling but are currently limited to hemispherical integrals and unidirectional path tracing. We introduce a new mathematical formulation that enjoys the benefits of both classes of methods. Unlike previous reparameterization work that focused on hemispherical integrals, we derive the reparameterization in path space. As a result, to estimate derivatives using our formulation, we can apply advanced Monte Carlo rendering methods, such as bidirectional path tracing, while avoiding explicit sampling of discontinuity boundaries. We show differentiable rendering and inverse rendering results to demonstrate the effectiveness of our method.
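A one-dimensional toy example (not the paper's path-space formulation) illustrates why reparameterization helps: when the integration domain depends on the parameter, differentiating a fixed-sample Monte Carlo estimate gives a wrong (zero) gradient, while substituting x = θu yields an integrand that is smooth in θ and can be differentiated directly through the samples.

```python
# 1D toy illustrating the reparameterization idea (not the paper's path-space
# derivation): I(theta) = \int_0^theta g(x) dx has a theta-dependent domain
# (a "moving discontinuity"). Differentiating a fixed-sample estimate of
# \int_0^1 step(x < theta) g(x) dx yields zero gradient, but after substituting
# x = theta * u the integrand is smooth in theta and plain differentiation works.
import numpy as np

g = lambda x: np.cos(x)                 # arbitrary smooth integrand
gp = lambda x: -np.sin(x)               # its derivative
theta = 0.7
u = np.random.rand(100_000)             # samples drawn independently of theta

# Reparameterized estimator: I(theta) = \int_0^1 g(theta * u) * theta du
I = np.mean(g(theta * u) * theta)

# Derivative in theta, taken through the (theta-independent) samples:
# d/dtheta [theta * g(theta u)] = g(theta u) + theta * u * g'(theta u)
dI = np.mean(g(theta * u) + theta * u * gp(theta * u))

print(I, np.sin(theta))                 # estimator vs. exact integral sin(theta)
print(dI, g(theta))                     # gradient vs. exact derivative g(theta)
```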
  3. Efficiently rendering direct lighting from millions of dynamic light sources using Monte Carlo integration remains a challenging problem, even for off-line rendering systems. We introduce a new algorithm—ReSTIR—that renders such lighting interactively, at high quality, and without needing to maintain complex data structures. We repeatedly resample a set of candidate light samples and apply further spatial and temporal resampling to leverage information from relevant nearby samples. We derive an unbiased Monte Carlo estimator for this approach, and show that it achieves equal-error 6×-60× faster than state-of-the-art methods. A biased estimator reduces noise further and is 35×-65× faster, at the cost of some energy loss. We implemented our approach on the GPU, rendering complex scenes containing up to 3.4 million dynamic, emissive triangles in under 50 ms per frame while tracing at most 8 rays per pixel. 
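The resampling step at the core of ReSTIR is streaming resampled importance sampling (RIS) with a weighted reservoir. The sketch below shows that step in isolation; the paper's spatial, temporal, and visibility reuse are omitted, and the source pdf p, target function p_hat, and integrand f are left as user-supplied callables.

```python
# Minimal sketch of streaming resampled importance sampling (RIS) with a
# weighted reservoir, the building block of ReSTIR: stream M candidate light
# samples through a reservoir, keep one with probability proportional to
# p_hat/p, then form the unbiased estimator f(y)/p_hat(y) * (w_sum / M).
# Spatial/temporal reuse from the paper is omitted here.
import random

class Reservoir:
    def __init__(self):
        self.y = None          # currently selected sample
        self.w_sum = 0.0       # running sum of resampling weights
        self.M = 0             # number of candidates seen so far

    def update(self, x, w):
        self.w_sum += w
        self.M += 1
        # Keep the new candidate with probability w / w_sum (A-Chao reservoir sampling).
        if self.w_sum > 0.0 and random.random() < w / self.w_sum:
            self.y = x

def ris_estimate(candidates, p, p_hat, f):
    """candidates drawn from the source pdf p; p_hat is the cheap target (e.g. unshadowed
    contribution); the full integrand f is evaluated only for the surviving sample."""
    r = Reservoir()
    for x in candidates:
        r.update(x, p_hat(x) / p(x))
    if r.y is None or p_hat(r.y) == 0.0:
        return 0.0
    W = r.w_sum / (r.M * p_hat(r.y))   # unbiased contribution weight
    return f(r.y) * W
```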
  4. Reconstructing and designing media with continuously varying refractive index fields remains a challenging problem in computer graphics. A core difficulty in tackling this inverse problem is that light travels inside such media along curves rather than straight lines. Existing techniques for this problem make strong assumptions about the shape of the ray inside the medium, and thus limit themselves to media where the ray deflection is relatively small. More recently, differentiable rendering techniques have relaxed this limitation by making it possible to differentiably simulate curved light paths. However, the automatic differentiation algorithms underlying these techniques use large amounts of memory, restricting existing differentiable rendering techniques to relatively small media and low spatial resolutions. We present a method for optimizing refractive index fields that both accounts for curved light paths and has a small, constant memory footprint. We use the adjoint state method to derive a set of equations for computing derivatives with respect to the refractive index field of optimization objectives that are subject to nonlinear ray tracing constraints. We additionally introduce discretization schemes to numerically evaluate these equations without the need to store nonlinear ray trajectories in memory, significantly reducing the memory requirements of our algorithm. We use our technique to optimize high-resolution refractive index fields for a variety of applications, including creating different types of displays (multiview, lightfield, caustic), designing gradient-index optics, and reconstructing gas flows.
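For context, the forward problem being differentiated is nonlinear ray tracing through a continuously varying index field n(x), governed by the ray equation d/ds(n dx/ds) = ∇n. The sketch below integrates that ODE with a simple stepping scheme; the adjoint-state machinery that avoids storing the trajectories is the paper's contribution and is not reproduced here. The callables n and grad_n are assumptions supplied by the user.

```python
# Sketch of the forward problem: tracing a curved ray through a continuously
# varying refractive index field n(x) by integrating the ray equation
# d/ds (n dx/ds) = grad n, rewritten with v = n * dx/ds so that
# dx/ds = v / n and dv/ds = grad n. The adjoint-state derivation that
# eliminates trajectory storage is beyond this short sketch.
import numpy as np

def trace_ray(x0, d0, n, grad_n, ds=1e-2, steps=2000):
    """x0: start position (3,), d0: initial direction (3,). Returns the sampled ray path."""
    x = np.asarray(x0, dtype=float)
    v = n(x) * np.asarray(d0, dtype=float) / np.linalg.norm(d0)   # v = n * dx/ds
    path = [x.copy()]
    for _ in range(steps):
        v = v + ds * grad_n(x)        # dv/ds = grad n(x)
        x = x + ds * v / n(x)         # dx/ds = v / n(x)
        path.append(x.copy())
    return np.array(path)

# Example (purely illustrative): a Luneburg-like radial gradient-index profile.
# n = lambda x: np.sqrt(max(2.0 - np.dot(x, x), 1.0))
# grad_n = lambda x: -x / n(x) if np.dot(x, x) < 1.0 else np.zeros(3)
# path = trace_ray([-2.0, 0.3, 0.0], [1.0, 0.0, 0.0], n, grad_n)
```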
  5. Larochelle, Hugo; Kamath, Gautam; Hadsell, Raia; Cho, Kyunghyun (Eds.)
    Neural scene representations, both continuous and discrete, have recently emerged as a powerful new paradigm for 3D scene understanding. Recent efforts have tackled unsupervised discovery of object-centric neural scene representations. However, the high cost of ray-marching, exacerbated by the fact that each object representation has to be ray-marched separately, leads to insufficiently sampled radiance fields and thus noisy renderings, poor framerates, and high memory and time complexity during training and rendering. Here, we propose to represent objects in an object-centric, compositional scene representation as light fields. We propose a novel light field compositor module that enables reconstructing the global light field from a set of object-centric light fields. Dubbed Compositional Object Light Fields (COLF), our method enables unsupervised learning of object-centric neural scene representations, state-of-the-art reconstruction and novel view synthesis performance on standard datasets, and rendering and training speeds orders of magnitude faster than existing 3D approaches.
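The abstract does not detail the compositor module, so the following is only a generic, heavily simplified sketch of how per-object light fields might be blended into a global light field: each object maps a ray to a color and an occupancy logit, and a softmax over objects (plus a background slot) weights the colors. The shapes and weighting scheme are assumptions for illustration, not the paper's design.

```python
# Highly simplified compositing sketch (the paper's compositor is more
# involved): per-object ray colors are blended with softmax weights derived
# from per-object, per-ray occupancy logits, with an implicit background slot.
import torch
import torch.nn as nn

class LightFieldCompositor(nn.Module):
    def forward(self, colors, logits, background):
        # colors: (B, K, R, 3)   per-object radiance for R rays
        # logits: (B, K, R)      per-object, per-ray occupancy scores
        # background: (B, R, 3)  background light field color
        bg_logit = torch.zeros_like(logits[:, :1])                      # implicit background slot
        w = torch.softmax(torch.cat([logits, bg_logit], dim=1), dim=1)  # (B, K+1, R)
        stacked = torch.cat([colors, background.unsqueeze(1)], dim=1)   # (B, K+1, R, 3)
        return (w.unsqueeze(-1) * stacked).sum(dim=1)                   # (B, R, 3) composited color

# comp = LightFieldCompositor()
# rgb = comp(per_object_colors, per_object_logits, background_colors)
```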