Title: Neural 3D holography: learning accurate wave propagation models for 3D holographic virtual and augmented reality displays
Holographic near-eye displays promise unprecedented capabilities for virtual and augmented reality (VR/AR) systems. The image quality achieved by current holographic displays, however, is limited by the wave propagation models used to simulate the physical optics. We propose a neural network-parameterized plane-to-multiplane wave propagation model that closes the gap between physics and simulation. Our model is automatically trained using camera feedback and it outperforms related techniques in 2D plane-to-plane settings by a large margin. Moreover, it is the first network-parameterized model to naturally extend to 3D settings, enabling high-quality 3D computer-generated holography using a novel phase regularization strategy of the complex-valued wave field. The efficacy of our approach is demonstrated through extensive experimental evaluation with both VR and optical see-through AR display prototypes.
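The idealized physical model that such learned approaches refine is typically a free-space propagation operator such as the angular spectrum method (ASM), which transports a complex wavefront from the SLM plane to a target plane. The sketch below, in plain NumPy, is an illustrative baseline only (function name and parameters are my own); the paper's contribution is a neural parameterization trained with camera feedback, not this classical operator.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a complex wavefront by `distance` using the angular spectrum method.

    field:      2D complex array (the wavefront at the SLM / source plane)
    wavelength: wavelength of the light in meters
    pitch:      pixel pitch in meters
    distance:   propagation distance in meters (may be negative)
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)   # spatial frequencies in cycles/m
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)

    # Transfer function H = exp(i*2*pi*d*sqrt(1/lambda^2 - fx^2 - fy^2));
    # evanescent components (negative radicand) are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * distance * np.sqrt(np.maximum(arg, 0.0))),
                 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Plane-to-multiplane propagation, as used for 3D holography, amounts to evaluating this operator at several target distances from the same SLM wavefront.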
Award ID(s):
1839974
PAR ID:
10353643
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM Transactions on Graphics
Volume:
40
Issue:
6
ISSN:
0730-0301
Page Range / eLocation ID:
1 to 12
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Holographic displays are an upcoming technology for AR and VR applications, with the ability to show 3D content with accurate depth cues, including accommodation and motion parallax. Recent research reveals that only a fraction of holographic pixels are needed to display images with high fidelity, improving energy efficiency in future holographic displays. However, the existing iterative method for computing sparse amplitude and phase layouts does not run in real time; instead, it takes hundreds of milliseconds to render an image into a sparse hologram. In this paper, we present a non-iterative amplitude and phase computation for sparse Fourier holograms that uses Perlin noise for the image-plane phase. We conduct both simulated and optical experiments. Compared to the Gaussian-weighted Gerchberg–Saxton method, our method runs over 600 times faster while producing nearly identical PSNR and SSIM quality. The real-time performance of our method enables the presentation of dynamic content crucial to AR and VR applications, such as video streaming and interactive visualization, on holographic displays.
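For context, the iterative baseline referenced above belongs to the Gerchberg–Saxton (GS) family, which alternates between the SLM and image planes, enforcing the phase-only constraint on one side and the target amplitude on the other. Below is a minimal NumPy sketch of plain GS for a Fourier hologram (names are illustrative; the Gaussian weighting and the closed-form Perlin-noise phase of the papers above are omitted).

```python
import numpy as np

def gerchberg_saxton(target_amp, iterations=50, seed=0):
    """Iteratively solve for a phase-only SLM pattern whose Fourier
    transform reproduces `target_amp` in amplitude (Fourier hologram)."""
    rng = np.random.default_rng(seed)
    # Random initial image-plane phase; the Perlin-noise method instead
    # chooses this phase in closed form, skipping the loop entirely.
    field = target_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(iterations):
        slm = np.fft.ifft2(field)
        slm = np.exp(1j * np.angle(slm))                  # phase-only constraint at the SLM
        img = np.fft.fft2(slm)
        field = target_amp * np.exp(1j * np.angle(img))   # amplitude constraint at the image
    return np.angle(slm)
```

Each iteration costs two FFTs, which is why hundreds of milliseconds accumulate at display resolutions; a non-iterative phase assignment removes the loop entirely.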
  2. Computer-generated holography (CGH) holds transformative potential for a wide range of applications, including direct-view, virtual and augmented reality, and automotive display systems. While research on holographic displays has recently made impressive progress, image quality and eye safety of holographic displays are fundamentally limited by the speckle introduced by coherent light sources. Here, we develop an approach to CGH using partially coherent sources. For this purpose, we devise a wave propagation model for partially coherent light that is demonstrated in conjunction with a camera-in-the-loop calibration strategy. We evaluate this algorithm using light-emitting diodes (LEDs) and superluminescent LEDs (SLEDs) and demonstrate improved speckle characteristics of the resulting holograms compared with coherent lasers. SLEDs in particular are demonstrated to be promising light sources for holographic display applications, because of their potential to generate sharp and high-contrast two-dimensional (2D) and 3D images that are bright, eye safe, and almost free of speckle. 
  3. Computer-generated holography (CGH) simulates the propagation and interference of complex light waves, allowing it to reconstruct realistic images captured from a specific viewpoint by solving the corresponding Maxwell equations. However, in applications such as virtual and augmented reality, viewers should be able to observe holograms freely from arbitrary viewpoints, much as we naturally see the physical world. In this work, we train a neural network to generate holograms at any view in a scene. Our result is the Neural Holographic Field: the first artificial-neural-network-based representation of light wave propagation in free space, which transforms sparse 2D photos into holograms that are not only 3D but also freely viewable from any perspective. We demonstrate this by visualizing various smartphone-captured scenes from arbitrary six-degree-of-freedom viewpoints on a prototype holographic display. To this end, we encode the measured light intensity from photos into a neural network representation of the underlying wavefields. Our method implicitly learns the amplitude and phase surrogates of the underlying incoherent light waves under coherent display conditions. During playback, the learned model predicts the underlying continuous complex wavefront propagating to arbitrary views to generate holograms.
  4. Implicit Neural Representations (INRs) are a learning-based approach to accelerate Magnetic Resonance Imaging (MRI) acquisitions, particularly in scan-specific settings when only data from the under-sampled scan itself are available. Previous work has shown that INRs improve rapid MRI through inherent regularization imposed by neural network architectures. Typically parameterized by fully connected neural networks, INRs provide continuous image representations by mapping a physical coordinate location to its intensity. Prior approaches have applied unlearned regularization priors during INR training and were limited to 2D or low-resolution 3D acquisitions. Meanwhile, diffusion-based generative models have recently gained attention for learning powerful image priors independent of the measurement model. This work proposes INFusion, a technique that regularizes INR optimization from under-sampled MR measurements using pre-trained diffusion models to enhance reconstruction quality. In addition, a hybrid 3D approach is introduced, enabling INR application on large-scale 3D MR datasets. Experimental results show that in 2D settings, diffusion regularization improves INR training, while in 3D, it enables feasible INR training on matrix sizes of 256 × 256 × 80. 
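The core INR idea in the paragraph above is a continuous map from a physical coordinate to its intensity, fitted to the available measurements. As a self-contained stand-in for the fully connected network (the diffusion regularization and the MRI forward model are omitted), the sketch below fits a linear readout over random Fourier features by least squares; all names are illustrative.

```python
import numpy as np

def fourier_features(coords, B):
    """Map coordinates to random Fourier features [cos(2*pi*Bx), sin(2*pi*Bx)]."""
    proj = 2 * np.pi * coords @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

def fit_inr(coords, intensities, n_features=256, scale=4.0, seed=0):
    """Fit a continuous coordinate -> intensity representation to sparse samples.

    A linear readout over random Fourier features stands in for the MLP used
    by actual INRs, and the least-squares fit replaces gradient descent.
    Returns a callable that evaluates the representation at new coordinates.
    """
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, scale, (n_features, coords.shape[1]))
    Phi = fourier_features(coords, B)
    w, *_ = np.linalg.lstsq(Phi, intensities, rcond=None)
    return lambda query: fourier_features(query, B) @ w
```

Because the representation is continuous in the coordinates, it can be queried off the measurement grid, which is what makes INRs attractive for under-sampled reconstruction.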
  5. Holography is a promising avenue for high-quality displays that do not require bulky, complex optical systems. While recent work has demonstrated accurate hologram generation for 2D scenes, high-quality holographic projections of 3D scenes have been out of reach until now. Existing multiplane 3D holography approaches fail to model wavefronts in the presence of partial occlusion, while holographic stereogram methods must make a fundamental tradeoff between spatial and angular resolution. In addition, existing 3D holographic display methods rely on heuristic encoding of complex amplitude into phase-only pixels, which results in holograms with severe artifacts. Fundamental limitations of the input representation, wavefront modeling, and optimization methods prohibit artifact-free 3D holographic projections in today's displays. To lift these limitations, we introduce hogel-free holography, which optimizes for true 3D holograms that support both depth- and view-dependent effects for the first time. Our approach overcomes the fundamental spatio-angular resolution tradeoff typical of stereogram approaches. Moreover, it avoids heuristic encoding schemes to achieve high image fidelity over a 3D volume. We validate that the proposed method achieves a 10 dB PSNR improvement on simulated holographic reconstructions, and we demonstrate our approach on an experimental prototype with accurate parallax and depth-focus effects.
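The heuristic encoding criticized above is typically a variant of double phase-amplitude coding, which splits each complex value into two unit-magnitude phasors whose average reproduces it, interleaved on a checkerboard. A minimal NumPy sketch of that classical scheme follows (illustrative only; hogel-free holography avoids this step).

```python
import numpy as np

def double_phase_encode(field):
    """Encode a complex field into a phase-only pattern via double phase coding.

    Uses the identity A*exp(i*phi) = 0.5*(exp(i*(phi+theta)) + exp(i*(phi-theta)))
    with theta = arccos(A), interleaving the two phase maps on a checkerboard.
    """
    amp = np.abs(field)
    amp = amp / amp.max()                  # normalize amplitudes so arccos is defined
    phi = np.angle(field)
    theta = np.arccos(amp)
    ny, nx = field.shape
    checker = (np.indices((ny, nx)).sum(axis=0) % 2).astype(bool)
    return np.where(checker, phi + theta, phi - theta)
```

The checkerboard interleaving relies on the display optics low-pass filtering adjacent pixels so their phasors average out, which is exactly the kind of approximation that introduces the artifacts mentioned above.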