We describe an improvement to the recently developed view-independent rendering (VIR), and apply it to dynamic cube-mapped reflections. Standard multiview rendering (MVR) renders a scene six times for each cube map. VIR instead traverses the geometry once per frame to generate a point cloud optimized for many cube maps, then uses it to render the reflected views in parallel. Our improvement, eye-resolution point rendering (EPR), is faster than VIR and makes cube-map rendering faster than MVR, with comparable visual quality. We are currently improving EPR's run time by reducing point cloud size and per-point processing.
Eye-Based Point Rendering for Dynamic Multiview Effects
Eye-based point rendering (EPR) can make multiview effects much more practical by adding eye (camera) buffer resolution efficiencies to improved view-independent rendering (iVIR). We demonstrate this by applying EPR to dynamic cube-mapped reflections, sometimes achieving nearly 7× speedups over iVIR and traditional multiview rendering (MVR), with nearly equivalent quality. Our application to omnidirectional soft shadows is less successful, demonstrating that EPR is most effective with larger shader loads and tight mappings from the eye buffer to off-screen (render target) buffers. This is because EPR's eye buffer resolution constraints limit points and shading calculations to the sampling rate of the eye's viewport. In a 2.48 million triangle scene with 50 reflective objects (using 300 off-screen views), EPR renders environment maps with a 49.40 ms average frame time on an NVIDIA 1080 Ti GPU. In doing so, EPR generates up to 5× fewer points than iVIR, and regularly performs 50× fewer shading calculations than MVR.
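The scaling contrast in the abstract above can be sketched with simple arithmetic: under MVR, every off-screen view re-traverses the scene geometry, while a point-based approach such as EPR traverses geometry once per frame and shares the resulting point cloud across views. The following accounting is our illustrative sketch, not the paper's implementation; the function names are ours.

```python
# Illustrative accounting (our sketch, not the paper's implementation) of
# why MVR cost grows with view count while EPR amortizes geometry work.

def mvr_offscreen_views(num_reflectors, faces_per_cube_map=6):
    """Under MVR, each cube-mapped reflector is rendered once per cube face."""
    return num_reflectors * faces_per_cube_map

def mvr_geometry_passes(num_views):
    """MVR re-traverses the full scene geometry for every off-screen view."""
    return num_views

def epr_geometry_passes(num_views):
    """EPR traverses geometry once per frame to build a shared point cloud,
    then reprojects those points into all off-screen views."""
    return 1
```

With the paper's 50 reflective objects this yields the quoted 300 off-screen views, i.e. 300 geometry passes per frame under MVR versus a single shared traversal under EPR.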
- Award ID(s):
- 2008590
- PAR ID:
- 10467587
- Publisher / Repository:
- ACM
- Date Published:
- Journal Name:
- Proceedings of the ACM on Computer Graphics and Interactive Techniques
- Volume:
- 6
- Issue:
- 1
- ISSN:
- 2577-6193
- Page Range / eLocation ID:
- 1 to 16
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Rendering for light field displays (LFDs) requires rendering dozens or hundreds of views, which must then be combined into a single image on the display, making real-time LFD rendering extremely difficult. We introduce light field display point rendering (LFDPR), which meets these challenges by improving eye-based point rendering [Gavane and Watson 2023] with texture-based splatting, which avoids oversampling of triangles mapped to only a few texels; and with LFD-biased sampling, which adjusts horizontal and vertical triangle sampling to match the sampling of the LFD itself. To improve image quality, we introduce multiview mipmapping, which reduces texture aliasing even though compute shaders do not support hardware mipmapping. We also introduce angular supersampling and reconstruction to combat LFD view aliasing and crosstalk. The resulting LFDPR is 2-8× faster than multiview rendering, with comparable quality.
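Because compute shaders lack the hardware derivative and mipmap path, multiview mipmapping must select mip levels explicitly. The standard rule this kind of computation follows is lambda = log2(rho), where rho is the larger of the two screen-axis texture-coordinate gradient lengths. The sketch below is illustrative, not LFDPR's actual code; the gradient inputs and clamping are assumptions.

```python
import math

def mip_level(du_dx, dv_dx, du_dy, dv_dy, max_level):
    """Select a mip level from one pixel's texel-space footprint, using the
    standard lambda = log2(rho) rule, where rho is the longer screen-axis
    gradient of the texture coordinates (measured in texels)."""
    rho = max(math.hypot(du_dx, dv_dx), math.hypot(du_dy, dv_dy))
    level = max(0.0, math.log2(max(rho, 1e-8)))  # clamp below base level
    return min(level, max_level)
```

A footprint of 4 texels per pixel selects level 2; footprints below one texel clamp to the base level 0.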
Yang, Yin (Ed.) This paper describes improvements to view independent rendering (VIR) that make it much more useful for multiview effects. Improved VIR's (iVIR's) soft shadows are nearly identical in quality to VIR's and produced with comparable speed (several times faster than multipass rendering), even when using a simpler bufferless implementation that does not risk overflow. iVIR's omnidirectional shadow results are still better, often nearly twice as fast as VIR's, even when bufferless. Most impressively, iVIR enables complex environment mapping in real time, producing high-quality reflections up to an order of magnitude faster than VIR, and 2-4 times faster than multipass rendering.
Recently, deep learning‐based denoising approaches have led to dramatic improvements in low sample‐count Monte Carlo rendering. These approaches are aimed at path tracing, which is not ideal for simulating challenging light transport effects like caustics, where photon mapping is the method of choice. However, photon mapping requires very large numbers of traced photons to achieve high‐quality reconstructions. In this paper, we develop the first deep learning‐based method for particle‐based rendering, and specifically focus on photon density estimation, the core of all particle‐based methods. We train a novel deep neural network to predict a kernel function to aggregate photon contributions at shading points. Our network encodes individual photons into per‐photon features, aggregates them in the neighborhood of a shading point to construct a photon local context vector, and infers a kernel function from the per‐photon and photon local context features. This network is easy to incorporate in many previous photon mapping methods (by simply swapping the kernel density estimator) and can produce high‐quality reconstructions of complex global illumination effects like caustics with an order of magnitude fewer photons compared to previous photon mapping methods. Our approach largely reduces the required number of photons, significantly advancing the computational efficiency in photon mapping.
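For context on the kernel density estimator the paper's network swaps out: classical photon density estimation aggregates nearby photon powers with a hand-designed kernel. The sketch below uses a 2D Epanechnikov kernel over (position, power) photon pairs; both the kernel choice and the 2D simplification are our illustrative assumptions, not the paper's method.

```python
import math

def epanechnikov(d, r):
    """2D Epanechnikov kernel weight for a photon at distance d, search
    radius r; normalized to integrate to 1 over the disk of radius r."""
    t = d / r
    return 2.0 * (1.0 - t * t) / (math.pi * r * r) if t < 1.0 else 0.0

def density_estimate(shading_point, photons, radius):
    """Kernel-weighted sum of photon powers near a shading point.
    Each photon is a (position, power) pair; positions are 2D here."""
    total = 0.0
    for pos, power in photons:
        total += power * epanechnikov(math.dist(shading_point, pos), radius)
    return total
```

The learned approach replaces the fixed `epanechnikov` weighting with a per-shading-point kernel inferred from photon features, which is what lets it reconstruct caustics from far fewer photons.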
We propose a real-time path guiding method, Voxel Path Guiding (VXPG), that significantly improves fitting efficiency under limited sampling budget. Our key idea is to use a spatial irradiance voxel data structure across all shading points to guide the location of path vertices. For each frame, we first populate the voxel data structure with irradiance and geometry information. To sample from the data structure for a shading point, we need to select a voxel with high contribution to that point. To importance sample the voxels while taking visibility into consideration, we adapt techniques from offline many-lights rendering by clustering pairs of shading points and voxels. Finally, we unbiasedly sample within the selected voxel while taking the geometry inside into consideration. Our experiments show that VXPG achieves significantly lower perceptual error compared to other real-time path guiding and virtual point light methods under equal-time comparison. Furthermore, our method does not rely on temporal information, but can be used together with other temporal reuse sampling techniques such as ReSTIR to further improve sampling efficiency.
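The voxel-selection step can be pictured as weighted discrete sampling: choose a voxel with probability proportional to its estimated contribution to the shading point. The contribution model below (irradiance attenuated by squared distance, ignoring visibility and clustering) is a simplified stand-in for VXPG's actual estimator; all names are ours.

```python
import math
import random

def select_voxel(shading_point, voxels, rng=random.random):
    """Return the index of a voxel chosen with probability proportional to
    its weight. Each voxel is (center, irradiance); the weight here is
    irradiance over squared distance, a stand-in that ignores visibility."""
    weights = [irr / (1e-6 + math.dist(shading_point, center) ** 2)
               for center, irr in voxels]
    u = rng() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if u <= acc:
            return i
    return len(voxels) - 1  # guard against floating-point rounding
```

With two equally bright voxels at distances 1 and 2, the nearer one is four times as likely to be selected under this stand-in weighting.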

