

Title: Towards Eyeglass-style Holographic Near-eye Displays with Static Expanded Eyebox
Holography is perhaps the only method demonstrated so far that can achieve both a wide field of view (FOV) and a compact, eyeglass-style form factor for augmented reality (AR) near-eye displays (NEDs). Unfortunately, the eyebox of such NEDs is impractically small (less than $\sim$1 mm). In this paper, we introduce and demonstrate a design for holographic NEDs, based on holographic lenslets, with a practical, wide eyebox of $\sim$10 mm and no moving parts. In our design, a holographic optical element (HOE) based on a lenslet array is fabricated as the image combiner to expand the eyebox. A phase spatial light modulator (SLM) alters the phase of the incident laser light projected onto the HOE combiner so that the virtual image can be perceived at different focus distances, which can reduce the vergence-accommodation conflict (VAC). We have implemented a bench-top prototype following the proposed design. The experimental results show effective eyebox expansion to a size of $\sim$10 mm. With further work, we hope that these design concepts can be incorporated into eyeglass-size NEDs.
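To make the focus-control idea concrete, the sketch below computes a wrapped quadratic (thin-lens) phase pattern of the kind a phase-only SLM could display to shift the perceived focus distance of the virtual image. It is a minimal illustration only; the wavelength, pixel pitch, resolution, and focal lengths are assumed values, not the prototype's actual parameters.

```python
import numpy as np

# Minimal sketch: a quadratic (thin-lens) phase profile for a phase-only SLM,
# wrapped to [0, 2*pi). Changing focal_length_m shifts the perceived focus
# distance of the virtual image. All parameters are illustrative assumptions.
wavelength_m = 532e-9          # green laser, assumed
pixel_pitch_m = 8e-6           # typical phase-SLM pixel pitch, assumed
res_y, res_x = 1080, 1920      # SLM resolution, assumed

def lens_phase(focal_length_m):
    """Return a wrapped quadratic phase map emulating a lens of the given focal length."""
    y = (np.arange(res_y) - res_y / 2) * pixel_pitch_m
    x = (np.arange(res_x) - res_x / 2) * pixel_pitch_m
    xx, yy = np.meshgrid(x, y)
    phase = -np.pi * (xx**2 + yy**2) / (wavelength_m * focal_length_m)
    return np.mod(phase, 2 * np.pi)

# Example: two phase maps that place the virtual image at different depths.
near_pattern = lens_phase(0.5)   # ~0.5 m focus
far_pattern = lens_phase(2.0)    # ~2.0 m focus
```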
Award ID(s):
1840131
NSF-PAR ID:
10301960
Author(s) / Creator(s):
Date Published:
Journal Name:
2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
Page Range / eLocation ID:
312 to 319
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Holography can offer unique solutions to the specific problems faced by automotive optical systems. Frequently, when possibilities have been exhausted with refractive and reflective designs, diffraction can come to the rescue by opening a new dimension to explore. Holographic optical elements (HOEs), for example, are thin-film optics that can advantageously replace lenses, prisms, or mirrors. Head-up displays (HUDs) and LIDAR for autonomous vehicles are two of the systems where our group has used HOEs to provide original answers to the limitations of classical optics. With HUDs, HOEs address the limited field of view and small eyebox usually found in projection systems. Our approach is to recycle the light multiple times inside a waveguide so that the combiner can be as large as the entire windshield. In this system, one hologram is used to inject a small image at one end of a waveguide, and another hologram is used to extract the image several times, providing an expanded eyebox. In the case of LIDAR systems, non-mechanical beam scanning based on a diffractive spatial light modulator (SLM) can only achieve an angular range of a few degrees. We used multiplexed volume holograms (VHs) to amplify the initial diffraction angle from the SLM and achieve up to 4π steradians of coverage in a compact form factor.
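    To illustrate the angle-amplification idea in the LIDAR example above, the sketch below applies the grating equation to show how a fine-period volume hologram can remap a few-degree SLM steering range onto a much wider output range. The wavelength, grating period, and steering angles are assumptions for illustration, not the authors' design values.

```python
import numpy as np

# Illustrative sketch (not the authors' design): the grating equation
# sin(theta_out) = sin(theta_in) + m * lambda / Lambda shows how a volume
# hologram with a fine fringe period can remap a small SLM steering angle
# into a much larger output angle. All numbers below are assumptions.
wavelength = 905e-9            # common LIDAR wavelength, assumed
grating_period = 1.0e-6        # volume-hologram fringe period, assumed

def diffracted_angle_deg(theta_in_deg, order=1):
    s = np.sin(np.deg2rad(theta_in_deg)) + order * wavelength / grating_period
    return np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

# A few-degree sweep at the SLM maps to a much wider sweep after the hologram.
for slm_angle in (-3.0, 0.0, 3.0):
    print(slm_angle, "->", round(diffracted_angle_deg(slm_angle), 1), "deg")
```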
  2. Deep learning-based image reconstruction methods have achieved remarkable success in phase recovery and holographic imaging. However, the generalization of their image reconstruction performance to new types of samples never seen by the network remains a challenge. Here we introduce a deep learning framework, termed Fourier Imager Network (FIN), that can perform end-to-end phase recovery and image reconstruction from raw holograms of new types of samples, exhibiting unprecedented success in external generalization. The FIN architecture is based on spatial Fourier transform modules that process the spatial frequencies of its inputs using learnable filters and a global receptive field. Compared with existing convolutional deep neural networks used for hologram reconstruction, FIN exhibits superior generalization to new types of samples while also being much faster in its image inference speed, completing the hologram reconstruction task in ∼0.04 s per 1 mm² of the sample area. We experimentally validated the performance of FIN by training it on human lung tissue samples and blindly testing it on human prostate, salivary gland tissue, and Pap smear samples, proving its superior external generalization and image reconstruction speed. Beyond holographic microscopy and quantitative phase imaging, FIN and the underlying neural network architecture might open up various new opportunities for designing broadly generalizable deep learning models in computational imaging and machine vision.

     
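    As a rough illustration of the kind of spatial Fourier module described above (learnable per-frequency filters with a global receptive field), here is a minimal PyTorch sketch; it is not the published FIN architecture, and the channel counts and shapes are assumed.

```python
import torch
import torch.nn as nn

# Minimal sketch of a spatial-Fourier filtering block in the spirit of FIN
# (not the published architecture): the input is transformed to the frequency
# domain, multiplied by learnable per-frequency filters (global receptive
# field), and transformed back. Shapes and channel counts are assumptions.
class SpectralFilterBlock(nn.Module):
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # One learnable complex filter per channel and spatial frequency.
        self.filter_real = nn.Parameter(torch.randn(channels, height, width // 2 + 1) * 0.02)
        self.filter_imag = nn.Parameter(torch.randn(channels, height, width // 2 + 1) * 0.02)

    def forward(self, x):                      # x: (batch, channels, height, width)
        spectrum = torch.fft.rfft2(x)          # complex spectrum of the input
        filt = torch.complex(self.filter_real, self.filter_imag)
        filtered = spectrum * filt             # element-wise spectral filtering
        return torch.fft.irfft2(filtered, s=x.shape[-2:])

# Example: filter a batch of 256x256 hologram feature maps with 8 channels.
block = SpectralFilterBlock(channels=8, height=256, width=256)
out = block(torch.randn(2, 8, 256, 256))
```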
  3. We report a novel lensless on-chip microscopy platform based on near-field blind ptychographic modulation. In this platform, we place a thin diffuser between the object and the image sensor for light-wave modulation. By blindly scanning the unknown diffuser to different x–y positions, we acquire a sequence of modulated intensity images for quantitative object recovery. Different from previous ptychographic implementations, we employ a unit-magnification configuration with a Fresnel number of ∼50 000, which is orders of magnitude higher than those of previous ptychographic setups. The unit-magnification configuration allows us to use the entire sensor area, 6.4 mm by 4.6 mm, as the imaging field of view. The ultra-high Fresnel number enables us to directly recover the positional shift of the diffuser in the phase retrieval process, addressing the positioning-accuracy issue plaguing regular ptychographic experiments. In our implementation, we use a low-cost, DIY scanning stage to perform blind diffuser modulation. Precise mechanical scanning, which is critical in conventional ptychography experiments, is no longer needed in our setup. We further employ an up-sampling phase retrieval scheme to bypass the resolution limit set by the imager pixel size and demonstrate a half-pitch resolution of 0.78 μm. We validate the imaging performance via in vitro cell cultures, transparent and stained tissue sections, and a thick biological sample. We show that the recovered quantitative phase map can be used to perform effective cell segmentation of a dense yeast culture. We also demonstrate 3D digital refocusing of the thick biological sample based on the recovered wavefront. The reported platform provides a cost-effective and turnkey solution for large field-of-view, high-resolution, quantitative on-chip microscopy. It is adaptable to a wide range of point-of-care, global-health, and telemedicine-related applications.
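    For readers who want to experiment with the near-field geometry described above, the sketch below implements a generic angular-spectrum propagator, a standard tool for simulating unit-magnification, high-Fresnel-number lensless setups. It is not the authors' reconstruction code; the wavelength, pixel size, and propagation distance are assumed values.

```python
import numpy as np

# Generic angular-spectrum propagator for simulating near-field lensless
# geometries like the one described above. Illustrative sketch only; the
# wavelength, pixel size, and distance below are assumptions.
def angular_spectrum_propagate(field, wavelength, pixel_size, distance):
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    fxx, fyy = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Example: propagate a plane wave through a weak random diffuser by 0.5 mm.
diffuser_field = np.exp(1j * 0.5 * np.random.randn(512, 512))
sensor_field = angular_spectrum_propagate(diffuser_field, 0.532e-6, 1.85e-6, 0.5e-3)
intensity = np.abs(sensor_field) ** 2    # what the image sensor would record
```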
  4. The distortions of absorption line profiles caused by photospheric brightness variations on the surfaces of cool, main-sequence stars can mimic or overwhelm radial velocity (RV) shifts due to the presence of exoplanets. The latest generation of precision RV spectrographs aims to detect velocity amplitudes ≲10 cm s⁻¹, but requires mitigation of stellar signals. Statistical techniques are being developed to differentiate between Keplerian and activity-related velocity perturbations. Two important challenges, however, are the interpretability of the stellar activity component as RV models become more sophisticated, and ensuring that the lowest-amplitude Keplerian signatures are not inadvertently absorbed by flexible models of stellar activity. For the K2V exoplanet host ϵ Eridani, we separately used ground-based photometry to constrain Gaussian processes for modeling RVs and TESS photometry with a light-curve inversion algorithm to reconstruct the stellar surface. From the reconstructions of the TESS photometry, we produced an activity model that reduced the rms scatter in RVs obtained with EXPRES from 4.72 to 1.98 m s⁻¹. We present a pilot study using the CHARA Array and the MIRC-X beam combiner to directly image the starspots seen in the TESS photometry. With the limited phase coverage, our spot detections are marginal with current data, but a future dedicated observing campaign should allow for imaging, as well as allow the stellar inclination and orientation with respect to the debris disk to be definitively determined. This work shows that stellar surface maps obtained with high-cadence, time-series photometric and interferometric data can provide the constraints needed to accurately reduce RV scatter.
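    As a minimal illustration of activity mitigation with Gaussian processes (one ingredient of the analysis above), the sketch below fits a quasi-periodic GP to a synthetic RV series and reports the rms scatter before and after subtracting the model. The kernel form, hyperparameters, and data are assumptions for illustration, not the paper's pipeline or measurements.

```python
import numpy as np

# Illustrative sketch (not the authors' pipeline): a quasi-periodic Gaussian
# process kernel of the kind commonly used to model rotation-modulated stellar
# activity in RV time series. All hyperparameter values are assumptions.
def quasi_periodic_kernel(t1, t2, amp, period, evol_timescale, smoothness):
    dt = t1[:, None] - t2[None, :]
    return (amp**2
            * np.exp(-0.5 * dt**2 / evol_timescale**2)
            * np.exp(-2.0 * np.sin(np.pi * dt / period)**2 / smoothness**2))

def gp_predict(t_obs, rv_obs, rv_err, t_pred, **hyper):
    # Standard GP conditional mean: K_* (K + sigma^2 I)^-1 y
    K = quasi_periodic_kernel(t_obs, t_obs, **hyper) + np.diag(rv_err**2)
    K_star = quasi_periodic_kernel(t_pred, t_obs, **hyper)
    return K_star @ np.linalg.solve(K, rv_obs)

# Example with synthetic data: subtract the activity model and compare rms.
t = np.sort(np.random.uniform(0, 100, 60))
rv = 3.0 * np.sin(2 * np.pi * t / 11.7) + np.random.normal(0, 0.5, t.size)
model = gp_predict(t, rv, np.full(t.size, 0.5), t,
                   amp=3.0, period=11.7, evol_timescale=40.0, smoothness=0.5)
print("rms before: %.2f  after: %.2f" % (rv.std(), (rv - model).std()))
```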
  5. Mérand, Antoine ; Sallum, Stephanie ; Sanchez-Bermudez, Joel (Ed.)
    The Michigan Young STar Imager at CHARA (MYSTIC) is a K-band interferometric beam-combining instrument funded by the United States National Science Foundation, designed primarily to image sub-au-scale disk structures around nearby young stars and to probe the planet formation process. Installed at the CHARA array in July 2021, with baselines up to 331 meters, MYSTIC provides a maximum angular resolution of λ/2B ∼ 0.7 mas. The instrument injects phase-corrected light from the array into inexpensive, single-mode, polarization-maintaining silica fibers, which are then passed via a vacuum feedthrough into a cryogenic dewar operating at 220 K for imaging. MYSTIC utilizes a high-frame-rate, ultra-low-read-noise SAPHIRA detector and implements two beam combiners: a 6-telescope image-plane beam combiner, based on the MIRC-X design, for targets as faint as 7.7 Kmag, as well as a 4-telescope integrated-optics beam-combiner mode using a spare chip left over from the GRAVITY instrument. MYSTIC is co-phased with the MIRC-X (J+H band) instrument for simultaneous fringe tracking and imaging, and shares its software suite with the latter to allow a single observer to operate both instruments. Herein, we present the instrument design, review its operational performance, present early commissioning science observations, and propose upgrades that could improve its K-band sensitivity to 10th magnitude in the near future.
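    As a quick consistency check of the quoted angular resolution, the sketch below evaluates λ/2B for a representative K-band wavelength of 2.2 μm (an assumed value) and the 331 m maximum baseline.

```python
import numpy as np

# Quick check of the quoted angular resolution: lambda / (2B) for K band
# (~2.2 micron, an assumed representative wavelength) and the 331 m maximum
# CHARA baseline, converted to milliarcseconds.
wavelength_m = 2.2e-6
baseline_m = 331.0
resolution_rad = wavelength_m / (2 * baseline_m)
resolution_mas = np.degrees(resolution_rad) * 3600e3
print(f"lambda/2B = {resolution_mas:.2f} mas")   # ~0.7 mas, matching the text
```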