


Award ID: 2008590


  1. The latest light-field displays have improved greatly, but continue to be based on the approximate pinhole model. For every frame, our real-time technique evaluates a full optical model, then renders an image predistorted at the sub-pixel level to match the current pixel-to-eye light flow, reducing cross-talk and increasing viewing angle.
    Free, publicly-accessible full text available May 5, 2026
  2. Since their creation, displays have used the top-to-bottom raster scan. In today's interactive applications, this scan is a liability, forcing users to choose between complete frames with synchronization delay and "torn" frames without it. We propose a stochastic scan that enables low-latency, unsynchronized display without tearing. We also discuss an interactive display simulator that allows us to investigate the effects of stochastic and other scans on interaction and imagery.
    Free, publicly-accessible full text available May 5, 2026
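The stochastic scan described above can be sketched with a minimal simulation: each scanline samples whichever frame is current at the moment it is scanned out, and an unsynchronized frame swap arrives mid-refresh. All names and the 8-line display are illustrative assumptions, not the paper's implementation.

```python
import random

def simulate_scan(order, swap_at):
    """Simulate one refresh: each scanline shows whichever frame is
    current when that line is scanned out. An unsynchronized buffer
    swap occurs after `swap_at` lines have been scanned."""
    shown = [0] * len(order)
    for t, line in enumerate(order):
        shown[line] = 0 if t < swap_at else 1  # new frame arrives mid-refresh
    return shown

H = 8
raster = list(range(H))                  # classic top-to-bottom scan
stochastic = random.sample(range(H), H)  # every line once, random order

# Raster: stale lines form one contiguous band -> a visible tear edge.
print(simulate_scan(raster, swap_at=4))      # [0, 0, 0, 0, 1, 1, 1, 1]
# Stochastic: stale lines are scattered -> no coherent tear edge.
print(simulate_scan(stochastic, swap_at=4))
```

With the raster order, the boundary between old and new frame content is a single horizontal line (the tear); with the stochastic order, the same number of stale lines is spread across the screen, which is why unsynchronized updates no longer produce a visible tear.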
  3. Displays are being used for increasingly interactive applications including gaming, video conferencing, and, perhaps most demanding, esports. We review the display needs of esports, and describe how current displays, built around a high-latency I/O pipeline, fail to meet them. We conclude with research directions that move away from this pipeline and better meet interactive user needs.
    Free, publicly-accessible full text available December 5, 2025
  4. Current light-field displays increase resolution and reduce cross-talk with head tracking, despite using simple lens models. With a more complete model, our real-time technique uses GPUs to analyze the current frame's light flow at subpixel precision, and to render a matching image that further improves resolution and cross-talk.
  5. Rendering for light field displays (LFDs) requires rendering dozens or hundreds of views, which must then be combined into a single image on the display, making real-time LFD rendering extremely difficult. We introduce light field display point rendering (LFDPR), which meets these challenges by improving eye-based point rendering [Gavane and Watson 2023] with texture-based splatting, which avoids oversampling of triangles mapped to only a few texels; and with LFD-biased sampling, which adjusts horizontal and vertical triangle sampling to match the sampling of the LFD itself. To improve image quality, we introduce multiview mipmapping, which reduces texture aliasing even though compute shaders do not support hardware mipmapping. We also introduce angular supersampling and reconstruction to combat LFD view aliasing and crosstalk. The resulting LFDPR is 2-8× faster than multiview rendering, with comparable quality.
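Because compute shaders lack hardware mipmapping, a software renderer must pick mip levels itself. Below is a minimal sketch of the standard derivative-based level selection (log2 of the largest texel footprint); the function name, rounding rule, and parameters are illustrative assumptions, not the paper's multiview-mipmapping implementation.

```python
import math

def mip_level(du_dx, dv_dx, du_dy, dv_dy, tex_w, tex_h, num_levels):
    """Select a mip level from texture-coordinate derivatives using
    the usual rule: lod = log2 of the largest texel footprint.
    Derivatives are in normalized UV units per screen pixel."""
    fx = math.hypot(du_dx * tex_w, dv_dx * tex_h)  # footprint along x
    fy = math.hypot(du_dy * tex_w, dv_dy * tex_h)  # footprint along y
    rho = max(fx, fy, 1e-8)
    lod = max(0.0, math.log2(rho))
    return min(num_levels - 1, int(round(lod)))

# One texel per pixel -> finest level 0.
print(mip_level(1/256, 0, 0, 1/256, 256, 256, 9))  # 0
# Four texels per pixel -> level 2.
print(mip_level(4/256, 0, 0, 4/256, 256, 256, 9))  # 2
```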
  6. Video conferencing has become a central part of our daily lives, thanks to the COVID-19 pandemic. Unfortunately, so have its many limitations, resulting in poor support for communicative and social behavior and ultimately, “zoom fatigue.” New technologies will be required to address these limitations, including many drawn from mixed reality (XR). In this paper, our goals are to equip and encourage future researchers to develop and test such technologies. Toward this end, we first survey research on the shortcomings of video conferencing systems, as defined before and after the pandemic. We then consider the methods that research uses to evaluate support for communicative behavior, and argue that those same methods should be employed in identifying, improving, and validating promising video conferencing technologies. Next, we survey emerging XR solutions to video conferencing's limitations, most of which do not employ head-mounted displays. We conclude by identifying several opportunities for video conferencing research in a post-pandemic, hybrid working environment. 
  7. Media technology is continuing its transition from passive streaming to participatory interactive experiences, including well‐known applications such as web browsing, video conferencing and gaming, as well as emerging and more demanding uses like AR/MR/VR and esports. How should display traits such as latency, refresh rate and size change to meet this trend? We review recent studies from NVIDIA Research and others on requirements for esports as the cutting edge of this trend toward interactivity, and discuss the studies’ implications for other interactive applications. 
  8. Eye-based point rendering (EPR) can make multiview effects much more practical by adding eye (camera) buffer resolution efficiencies to improved view-independent rendering (iVIR). We demonstrate this very successfully by applying EPR to dynamic cube-mapped reflections, sometimes achieving nearly 7× speedups over iVIR and traditional multiview rendering (MVR), with nearly equivalent quality. Our application to omnidirectional soft shadows is less successful, demonstrating that EPR is most effective with larger shader loads and tight eye buffer to off-screen (render target) buffer mappings. This is due to EPR's eye buffer resolution constraints limiting points and shading calculations to the sampling rate of the eye's viewport. In a 2.48 million triangle scene with 50 reflective objects (using 300 off-screen views), EPR renders environment maps with a 49.40ms average frame time on an NVIDIA 1080 Ti GPU. In doing so, EPR generates up to 5× fewer points than iVIR, and regularly performs 50× fewer shading calculations than MVR.
  9. Yang, Yin (Ed.)
    This paper describes improvements to view independent rendering (VIR) that make it much more useful for multiview effects. Improved VIR's (iVIR's) soft shadows are nearly identical in quality to VIR's and produced with comparable speed (several times faster than multipass rendering), even when using a simpler bufferless implementation that does not risk overflow. iVIR's omnidirectional shadow results are still better, often nearly twice as fast as VIR's, even when bufferless. Most impressively, iVIR enables complex environment mapping in real time, producing high-quality reflections up to an order of magnitude faster than VIR, and 2-4 times faster than multipass rendering. 
  10. We describe an improvement to the recently developed view independent rendering (VIR), and apply it to dynamic cube-mapped reflections. Standard multiview rendering (MVR) renders a scene six times for each cube map. VIR traverses the geometry once per frame to generate a point cloud optimized to many cube maps, using it to render reflected views in parallel. Our improvement, eye-resolution point rendering (EPR), is faster than VIR and makes cube maps faster than MVR, with comparable visual quality. We are currently improving EPR’s run time by reducing point cloud size and per-point processing. 
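The shared-point-cloud idea above can be sketched as a single pass that buckets points by cube-map face, rather than rasterizing the scene once per face. This is a toy illustration under stated assumptions: the names are invented, and real VIR/EPR projects and splats each point into a view rather than merely bucketing it.

```python
def cube_face(p):
    """Which of the six cube-map faces a direction p = (x, y, z)
    falls on: the axis with the largest magnitude wins."""
    x, y, z = p
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return '+x' if x > 0 else '-x'
    if ay >= ax and ay >= az:
        return '+y' if y > 0 else '-y'
    return '+z' if z > 0 else '-z'

def splat_points(points):
    """One pass over the shared point cloud assigns every point to a
    face, instead of rasterizing the scene six times per cube map."""
    faces = {f: [] for f in ('+x', '-x', '+y', '-y', '+z', '-z')}
    for p in points:
        faces[cube_face(p)].append(p)
    return faces

pts = [(1, 0.2, 0.1), (-3, 0, 0), (0, 2, 0.5), (0.1, 0.1, -5)]
buckets = splat_points(pts)
print({f: len(v) for f, v in buckets.items()})
# {'+x': 1, '-x': 1, '+y': 1, '-y': 0, '+z': 0, '-z': 1}
```

The design point this illustrates: MVR's cost is six full scene traversals per cube map, while the point-cloud approach pays one traversal and then distributes the resulting points across faces (or across many cube maps) in parallel.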