We describe an improvement to the recently developed view-independent rendering (VIR) and apply it to dynamic cube-mapped reflections. Standard multiview rendering (MVR) renders the scene six times for each cube map. VIR instead traverses the geometry once per frame to generate a point cloud optimized for many cube maps, then uses it to render the reflected views in parallel. Our improvement, eye-resolution point rendering (EPR), is faster than VIR and renders cube maps faster than MVR, with comparable visual quality. We are currently improving EPR's run time by reducing point cloud size and per-point processing.
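As a rough illustration of the shared point cloud idea described above, the following C++ sketch bins each point of a single per-frame point cloud to a cube-map face for every reflective probe, rather than rasterizing the scene six times per cube map. The sample data, probe positions, and helper names are hypothetical; this is a minimal sketch, not VIR's or EPR's implementation.

```cpp
// Minimal sketch (not the authors' implementation): one geometry pass emits a
// shared point cloud, and each point is then binned to a cube-map face for
// every reflective probe instead of re-rasterizing the scene six times.
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// +X,-X,+Y,-Y,+Z,-Z -- pick the cube face whose axis dominates the direction.
int cubeFace(Vec3 d) {
    float ax = std::fabs(d.x), ay = std::fabs(d.y), az = std::fabs(d.z);
    if (ax >= ay && ax >= az) return d.x >= 0 ? 0 : 1;
    if (ay >= az)             return d.y >= 0 ? 2 : 3;
    return d.z >= 0 ? 4 : 5;
}

int main() {
    // One shared point cloud (hard-coded points standing in for the output of
    // the single per-frame geometry traversal).
    std::vector<Vec3> points = {{1, 0, 0}, {0, 2, 0}, {-3, 0, 1}};
    // Hypothetical cube-map centers, one per reflective object.
    std::vector<Vec3> probes = {{0, 0, 0}, {5, 0, 0}};

    for (size_t p = 0; p < probes.size(); ++p) {
        std::array<int, 6> faceCounts{};
        for (const Vec3& q : points)
            faceCounts[cubeFace(sub(q, probes[p]))]++;   // bin point to a face
        std::printf("probe %zu face counts:", p);
        for (int c : faceCounts) std::printf(" %d", c);
        std::printf("\n");
    }
}
```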
Eye-Based Point Rendering for Dynamic Multiview Effects
Eye-based point rendering (EPR) can make multiview effects much more practical by adding eye (camera) buffer resolution efficiencies to improved view-independent rendering (iVIR). We demonstrate this by applying EPR to dynamic cube-mapped reflections, sometimes achieving nearly 7× speedups over iVIR and traditional multiview rendering (MVR), with nearly equivalent quality. Our application to omnidirectional soft shadows is less successful, showing that EPR is most effective with larger shader loads and tight mappings from the eye buffer to the off-screen (render target) buffers. This is because EPR's eye buffer resolution constraints limit points and shading calculations to the sampling rate of the eye's viewport. In a 2.48 million triangle scene with 50 reflective objects (using 300 off-screen views), EPR renders environment maps with a 49.40 ms average frame time on an NVIDIA 1080 Ti GPU. In doing so, EPR generates up to 5× fewer points than iVIR and regularly performs 50× fewer shading calculations than MVR.
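The eye-resolution constraint mentioned above can be illustrated with a small, assumed helper that budgets points per triangle from the triangle's footprint in the eye buffer, so off-screen shading work never exceeds the eye viewport's sampling rate. The function name and clamp value are hypothetical and not taken from the paper.

```cpp
// Minimal sketch of the eye-resolution idea (an assumption, not the paper's
// code): the number of points generated for a triangle is tied to its area in
// the eye (camera) viewport, so off-screen shading never exceeds the eye's
// sampling rate.
#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical helper: points budgeted for a triangle covering `eyePixelArea`
// pixels in the eye buffer, never more than `maxPerTriangle`.
int pointBudget(float eyePixelArea, int maxPerTriangle = 1024) {
    int n = static_cast<int>(std::ceil(eyePixelArea));
    return std::clamp(n, 1, maxPerTriangle);
}

int main() {
    // A triangle covering ~12.7 eye pixels gets 13 points; a sub-pixel
    // triangle still gets 1 so it is not lost in the reflections.
    std::printf("%d %d\n", pointBudget(12.7f), pointBudget(0.2f));
}
```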
- Award ID(s): 2008590
- PAR ID: 10467587
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Proceedings of the ACM on Computer Graphics and Interactive Techniques
- Volume: 6
- Issue: 1
- ISSN: 2577-6193
- Page Range / eLocation ID: 1 to 16
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Rendering for light field displays (LFDs) requires rendering dozens or hundreds of views, which must then be combined into a single image on the display, making real-time LFD rendering extremely difficult. We introduce light field display point rendering (LFDPR), which meets these challenges by improving eye-based point rendering [Gavane and Watson 2023] with texture-based splatting, which avoids oversampling of triangles mapped to only a few texels; and with LFD-biased sampling, which adjusts horizontal and vertical triangle sampling to match the sampling of the LFD itself. To improve image quality, we introduce multiview mipmapping, which reduces texture aliasing even though compute shaders do not support hardware mipmapping. We also introduce angular supersampling and reconstruction to combat LFD view aliasing and crosstalk. The resulting LFDPR is 2-8× faster than multiview rendering, with comparable quality.
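Since compute shaders lack hardware mipmapping, the multiview mipmapping mentioned above has to pick a mip level explicitly. The sketch below shows one conventional way to derive that level from a sample's texel footprint; it is an assumed illustration, not the LFDPR implementation.

```cpp
// Minimal sketch (assumed, not the paper's code) of manual mip selection of
// the kind a compute shader needs when hardware mipmapping is unavailable:
// the level of detail comes from the texel footprint of one sample.
#include <algorithm>
#include <cmath>
#include <cstdio>

// du, dv: texture-space footprint of one sample, in texels.
float mipLevel(float du, float dv, int mipCount) {
    float footprint = std::max(du, dv);
    float lod = std::log2(std::max(footprint, 1.0f)); // 0 = finest level
    return std::min(lod, static_cast<float>(mipCount - 1));
}

int main() {
    // A sample covering ~8 texels in u selects roughly mip level 3.
    std::printf("%.2f\n", mipLevel(8.0f, 2.0f, 10));
}
```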
This paper describes improvements to view-independent rendering (VIR) that make it much more useful for multiview effects. Improved VIR's (iVIR's) soft shadows are nearly identical in quality to VIR's and produced with comparable speed (several times faster than multipass rendering), even when using a simpler bufferless implementation that does not risk overflow. iVIR's omnidirectional shadow results are better still, often nearly twice as fast as VIR's, even when bufferless. Most impressively, iVIR enables complex environment mapping in real time, producing high-quality reflections up to an order of magnitude faster than VIR, and 2-4 times faster than multipass rendering.
Whenever the concept of high-performance cloth simulation is brought up, GPU acceleration is almost always the first thing that comes to mind. Leveraging immense parallelization, GPU algorithms have demonstrated significant success recently, whereas CPU methods are somewhat overlooked. Yet the need for an efficient CPU simulator is evident and pressing. In many scenarios, high-end GPUs may be unavailable or already allocated to other tasks, such as rendering and shading. A high-performance CPU alternative can greatly boost the overall system capability and user experience. Motivated by this demand, this paper proposes a CPU algorithm for high-resolution cloth simulation. By partitioning the garment model into multiple (but not massive) sub-meshes or domains, we assign per-domain computations to individual CPU processors. Borrowing the idea of projective dynamics, which breaks the computation into global and local steps, our key contribution is a new domain-level parallelization paradigm for both the global and local steps, so that domain-level calculations remain sequential and lightweight. The CPU has far fewer processing units than a GPU; our algorithm mitigates this disadvantage by carefully balancing the scale of parallelization against convergence. We validate our method on a wide range of simulation problems involving high-resolution garment models. Performance-wise, our method is at least an order of magnitude faster than existing CPU methods, and it delivers performance similar to state-of-the-art GPU algorithms in many examples, without using a GPU.
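The domain-parallel structure described above can be sketched as follows: each domain's local projective-dynamics step runs on its own CPU thread, and a lightweight global step runs sequentially afterwards. The data layout, the stand-in projection, and the thread-per-domain mapping are assumptions for illustration, not the paper's solver.

```cpp
// Minimal sketch (under assumed data layout, not the paper's solver) of the
// domain-parallel pattern: the cloth mesh is split into a handful of domains,
// each domain runs its local projective-dynamics step on its own thread, and
// a lightweight global step runs afterwards.
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct Domain { std::vector<float> stretch; };  // per-edge constraint residuals

// Local step: project each constraint independently (embarrassingly parallel
// within a domain, so one thread per domain is enough).
void localStep(Domain& d) {
    for (float& s : d.stretch) s *= 0.5f;       // stand-in for the projection
}

int main() {
    std::vector<Domain> domains(4, Domain{{1.0f, 2.0f, 3.0f}});

    // Local steps: one CPU thread per domain.
    std::vector<std::thread> workers;
    for (Domain& d : domains) workers.emplace_back(localStep, std::ref(d));
    for (std::thread& t : workers) t.join();

    // Global step: sequential and lightweight, combining domain results.
    float sum = 0.0f;
    for (const Domain& d : domains)
        for (float s : d.stretch) sum += s;
    std::printf("global residual: %.2f\n", sum);
}
```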
Recently, deep learning-based denoising approaches have led to dramatic improvements in low sample-count Monte Carlo rendering. These approaches are aimed at path tracing, which is not ideal for simulating challenging light transport effects like caustics, where photon mapping is the method of choice. However, photon mapping requires very large numbers of traced photons to achieve high-quality reconstructions. In this paper, we develop the first deep learning-based method for particle-based rendering, focusing specifically on photon density estimation, the core of all particle-based methods. We train a novel deep neural network to predict a kernel function that aggregates photon contributions at shading points. Our network encodes individual photons into per-photon features, aggregates them in the neighborhood of a shading point to construct a photon local context vector, and infers a kernel function from the per-photon and photon local context features. This network is easy to incorporate into many previous photon mapping methods (by simply swapping the kernel density estimator) and can produce high-quality reconstructions of complex global illumination effects like caustics with an order of magnitude fewer photons than previous photon mapping methods. Our approach substantially reduces the required number of photons, significantly advancing the computational efficiency of photon mapping.
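For context, the quantity this network replaces is the fixed kernel in classic photon density estimation, sketched below; a learned estimator would swap the fixed kernel for per-photon weights predicted from photon and local-context features. The kernel choice and data here are illustrative assumptions, not the paper's method.

```cpp
// Minimal sketch (not the paper's network) of the quantity being replaced:
// classic photon density estimation sums kernel-weighted photon flux around a
// shading point; the learned method swaps the fixed kernel for per-photon
// weights predicted by a neural network.
#include <cmath>
#include <cstdio>
#include <vector>

struct Photon { float x, y, z; float flux; };

// Fixed Epanechnikov-style kernel over the search radius r.
float fixedKernel(float dist, float r) {
    float t = dist / r;
    return t < 1.0f ? 2.0f / (3.14159265f * r * r) * (1.0f - t * t) : 0.0f;
}

float densityEstimate(const std::vector<Photon>& photons,
                      float px, float py, float pz, float r) {
    float radiance = 0.0f;
    for (const Photon& ph : photons) {
        float dx = ph.x - px, dy = ph.y - py, dz = ph.z - pz;
        float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        // A learned estimator would replace fixedKernel with a predicted
        // per-photon weight built from photon and local-context features.
        radiance += fixedKernel(dist, r) * ph.flux;
    }
    return radiance;
}

int main() {
    std::vector<Photon> photons = {{0.1f, 0, 0, 1.0f}, {0.4f, 0.2f, 0, 0.5f}};
    std::printf("%.4f\n", densityEstimate(photons, 0, 0, 0, 0.5f));
}
```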