Holographic displays promise to deliver unprecedented display capabilities in augmented reality applications, featuring a wide field of view, a wide color gamut, high spatial resolution, and accurate depth cues, all in a compact form factor. While emerging holographic display approaches have been successful in achieving large étendue and high image quality as seen by a camera, the large étendue also reveals a problem that makes existing displays impractical: the sampling of the holographic field by the eye pupil. Existing methods have not investigated this issue due to the lack of displays with large enough étendue, and, as such, they suffer from severe artifacts with varying eye pupil size and location. We show that the holographic field as sampled by the eye pupil is highly varying for existing display setups, and we propose pupil-aware holography that maximizes the perceptual image quality irrespective of the size, location, and orientation of the eye pupil in a near-eye holographic display. We validate the proposed approach both in simulations and on a prototype holographic display and show that our method eliminates severe artifacts and significantly outperforms existing approaches.
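The pupil-sampling problem described in the abstract can be illustrated with a toy simulation (this is a simplified stand-in, not the paper's method): if the eye pupil is modeled as a circular aperture in the angular-spectrum domain of the field, shifting or shrinking that aperture selects different plane-wave components, so the perceived image varies with pupil size and position.

```python
import numpy as np

def pupil_sampled_image(field, pupil_radius_px, pupil_center=(0, 0)):
    """Toy model of eye-pupil sampling of a holographic field: the pupil
    acts as a circular aperture in the Fourier (angular) domain, so a
    shifted or shrunken pupil selects different plane-wave components."""
    n = field.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(field))
    yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    cy, cx = pupil_center
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= pupil_radius_px ** 2
    sampled = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.abs(sampled) ** 2  # perceived intensity

# A random-phase field (common in holographic rendering) looks very
# different under two pupil positions of the same size:
rng = np.random.default_rng(0)
field = np.exp(1j * 2 * np.pi * rng.random((256, 256)))
img_a = pupil_sampled_image(field, pupil_radius_px=20, pupil_center=(0, 0))
img_b = pupil_sampled_image(field, pupil_radius_px=20, pupil_center=(60, 0))
```

Comparing `img_a` and `img_b` exposes the artifact source the paper targets: the retinal image depends strongly on where the pupil lands in the field.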
Membrane AR: varifocal, wide field of view augmented reality display from deformable membranes
Accommodative depth cues, a wide field of view, and ever-higher resolutions present major design challenges for near-eye displays. Optimizing a design to overcome one of them typically leads to a trade-off in the others. We tackle this problem by introducing an all-in-one solution - a novel display for augmented reality. The key components of our solution are two see-through, varifocal deformable membrane mirrors reflecting a display. They are controlled by airtight cavities and change the effective focal power to present a virtual image at a target depth plane. The benefits of the membranes include a wide field of view and fast depth switching.
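The depth-switching principle behind the membrane mirrors can be sketched with first-order optics (a thin-mirror approximation, assuming hypothetical distances that are not from the paper): changing the membrane's effective focal power moves the virtual image of the reflected display to a different depth plane.

```python
def membrane_focal_power(display_dist_m, image_dist_m):
    """Focal power (diopters) a curved mirror needs to form a virtual
    image of a display `display_dist_m` in front of it at a target depth
    `image_dist_m`, in the thin-mirror approximation with the virtual
    image distance taken on the object side. Hypothetical numbers only."""
    return 1.0 / display_dist_m - 1.0 / image_dist_m

# Sweeping the virtual image plane with a display 5 cm from the membrane:
powers = {d: membrane_focal_power(0.05, d) for d in (0.25, 1.0, 4.0)}
# 0.25 m -> 16.0 D, 1.0 m -> 19.0 D, 4.0 m -> 19.75 D
```

The small spread in required power across a large depth range is what makes modest membrane deformations sufficient for fast varifocal switching.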
- Award ID(s):
- 1645463
- PAR ID:
- 10076685
- Date Published:
- Journal Name:
- ACM SIGGRAPH 2017 Emerging Technologies (SIGGRAPH '17)
- Page Range / eLocation ID:
- 1 to 2
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
We develop optical calibration and distortion correction for a recently developed DMD-based volumetric augmented reality display. The display is capable of displaying imagery over a large volume, composed of 280 depth planes spanning a large depth range (15 cm to 400 cm) and a 40-degree field of view. An unintended property of this display is that the field of view of the depth planes changes slightly over depth. This can cause distortions, perceptual errors in the perspective depth cue, and a slight reduction in image quality. To address these issues, we develop an optical calibration method and a distortion correction applied as a post-processing step in our rendering pipeline.
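A minimal sketch of such a post-processing correction, assuming a standard radial (Brown-Conrady) distortion model with per-depth-plane coefficients; the model, coefficients, and depths here are illustrative, not the paper's calibration:

```python
import numpy as np

def undistort(u, v, k1, k2):
    """Radial distortion correction on normalized image coordinates.
    k1, k2 would be fit per depth plane during calibration; the values
    used below are hypothetical."""
    r2 = u * u + v * v
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return u * s, v * s

# Because the field of view drifts with depth, coefficients can be fit at
# a few calibration depths and interpolated across the 280 planes:
cal_depths = np.array([0.15, 1.0, 4.0])      # meters (hypothetical)
cal_k1 = np.array([-0.021, -0.018, -0.012])  # hypothetical per-depth fits

def k1_at(depth_m):
    """Linearly interpolate the k1 coefficient for an arbitrary plane."""
    return float(np.interp(depth_m, cal_depths, cal_k1))
```

Interpolating a handful of calibrated planes keeps the per-frame correction cheap enough to run inside the rendering pipeline.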
-
Deformable beamsplitters have been shown to enable a wide field of view, varifocal, optical see-through augmented reality display. Current systems suffer from degraded optical quality at far focus and are tethered to large air compressors or pneumatic devices, which prevents small, self-contained systems. We present an analysis of the shape of the curved beamsplitter as it deforms to different focal depths. Our design also demonstrates a step forward in reducing the form factor of the overall system.
-
A lens performs an approximately one-to-one mapping from the object plane to the image plane. This mapping is maintained within a depth of field (referred to as depth of focus if the object is at infinity), which necessitates refocusing the lens when images are separated by distances larger than the depth of field. Such refocusing mechanisms can increase the cost, complexity, and weight of imaging systems. Here we show that by judicious design of a multi-level diffractive lens (MDL) it is possible to drastically enhance the depth of focus by over 4 orders of magnitude. Using such a lens, we are able to maintain focus for objects separated by large distances in our experiments. Specifically, when illuminated by collimated light, the MDL produced a beam that remained in focus from 5 to 1200 mm. The measured full width at half-maximum of the focused beam varied from 6.6 µm (5 mm from the MDL) to 524 µm (1200 mm from the MDL). Since the side lobes were well suppressed and the main lobe was close to the diffraction limit, imaging over the entire focal range was possible. This demonstration opens up a new direction for lens design: by treating the phase in the focal plane as a free parameter, extreme-depth-of-focus imaging becomes possible.
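For scale, the conventional depth of focus can be estimated as DOF ≈ λ/NA² and compared against the MDL's reported 5-1200 mm in-focus range. The wavelength and numerical aperture below are assumed values for illustration (the abstract's exact parameters are not reproduced here), so the resulting ratio is indicative only:

```python
def depth_of_focus_mm(wavelength_um, na):
    """Scalar estimate of conventional depth of focus, DOF ≈ λ / NA²,
    returned in millimeters."""
    return wavelength_um * 1e-3 / (na * na)

# Assumed illustrative parameters (not from the abstract):
dof = depth_of_focus_mm(0.85, 0.05)  # ≈ 0.34 mm for a conventional lens
extended_mm = 1200 - 5               # the MDL's reported in-focus span
ratio = extended_mm / dof            # roughly 3500x under these assumptions
```

Even with these rough numbers, the extended range exceeds the conventional estimate by several orders of magnitude, consistent with the claim's scale.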
-
Battery life is an increasingly urgent challenge for today's untethered VR and AR devices. However, the power efficiency of head-mounted displays is naturally at odds with growing computational requirements driven by better resolution, refresh rate, and dynamic range, all of which reduce the sustained usage time of untethered AR/VR devices. For instance, the Oculus Quest 2 can sustain only 2 to 3 hours of operation on a full charge. Prior display power reduction techniques mostly target smartphone displays. Directly applying them to AR/VR, however, degrades visual perception with noticeable artifacts. For instance, the "power-saving mode" on smartphones uniformly lowers pixel luminance across the display and, if applied directly to VR content, presents an overall darkened view to users. Our key insight is that VR display power reduction must be cognizant of the gaze-contingent nature of wide field-of-view VR displays. To that end, we present a gaze-contingent system that, without degrading luminance, minimizes display power consumption while preserving high visual fidelity when users actively view immersive video sequences. This is enabled by constructing 1) a gaze-contingent color discrimination model through psychophysical studies, and 2) a display power model (with respect to pixel color) through real-device measurements. Critically, owing to careful design decisions in constructing the two models, our algorithm is cast as a constrained optimization problem with a closed-form solution, which can be implemented as a real-time, image-space shader. We evaluate our system using a series of psychophysical studies and large-scale analyses on natural images. Experimental results show that our system reduces display power by as much as 24% (14% on average) with little to no perceptual fidelity degradation.
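The luminance-preserving optimization can be sketched in simplified form (a toy stand-in for the paper's perceptual-ellipsoid formulation): assume display power is linear in the RGB value with channel-dependent weights, then move each color against the power gradient while projecting out the luminance direction. The power weights below are made up; only the Rec. 709 luma coefficients are standard.

```python
import numpy as np

# Illustrative linear power model P = w · rgb; blue is typically the most
# expensive channel on emissive displays (these weights are hypothetical).
w = np.array([0.23, 0.33, 0.44])
# Rec. 709 luma coefficients for linear RGB.
y = np.array([0.2126, 0.7152, 0.0722])

def power_saving_shift(rgb, eps):
    """Shift a color to reduce modeled power while holding luminance
    fixed: project -w onto the plane {d : y·d = 0} and step at most eps.
    The eps budget stands in for a perceptual discrimination threshold."""
    d = -(w - (w @ y) / (y @ y) * y)  # descent direction with y·d = 0
    d = d / np.linalg.norm(d)
    return np.clip(rgb + eps * d, 0.0, 1.0)

c0 = np.array([0.5, 0.5, 0.5])
c1 = power_saving_shift(c0, eps=0.05)
# c1 has the same luminance as c0 but lower modeled power w·c1 < w·c0.
```

Because the direction and step size have closed forms, the per-pixel update is a few multiply-adds, which is what makes a real-time image-space shader implementation plausible.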