Holographic displays are an emerging technology for AR and VR applications, able to show 3D content with accurate depth cues, including accommodation and motion parallax. Recent research shows that only a fraction of holographic pixels are needed to display images with high fidelity, which can improve the energy efficiency of future holographic displays. However, the existing iterative method for computing sparse amplitude and phase layouts does not run in real time; it takes hundreds of milliseconds to render an image into a sparse hologram. In this paper, we present a non-iterative amplitude and phase computation for sparse Fourier holograms that uses Perlin noise for the image-plane phase. We evaluate the method in both simulated and optical experiments. Compared to the Gaussian-weighted Gerchberg–Saxton method, our method runs more than 600 times faster while producing nearly equal PSNR and SSIM. This real-time performance enables holographic displays to present the dynamic content crucial to AR and VR applications, such as video streaming and interactive visualization.
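The general recipe the abstract describes is simple enough to sketch. The Python/NumPy example below is only an illustration, not the authors' implementation: a bilinearly upsampled coarse random grid stands in for Perlin noise, the image-plane field receives that smooth random phase, a single inverse FFT yields the Fourier-plane hologram, and only the strongest hologram pixels are kept. The function names, the noise stand-in, and the 5% sparsity level are all assumptions.

```python
# Minimal sketch (not the paper's pipeline): non-iterative sparse Fourier hologram
# from a smooth, Perlin-like image-plane phase.
import numpy as np

def smooth_noise(shape, cells=16, rng=None):
    """Stand-in for Perlin noise: bilinear upsampling of a coarse random grid."""
    rng = np.random.default_rng() if rng is None else rng
    coarse = rng.random((cells + 1, cells + 1))
    ys = np.linspace(0, cells, shape[0], endpoint=False)
    xs = np.linspace(0, cells, shape[1], endpoint=False)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    fy, fx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = (1 - fx) * coarse[np.ix_(y0, x0)] + fx * coarse[np.ix_(y0, x0 + 1)]
    bot = (1 - fx) * coarse[np.ix_(y0 + 1, x0)] + fx * coarse[np.ix_(y0 + 1, x0 + 1)]
    return (1 - fy) * top + fy * bot

def sparse_fourier_hologram(target_amp, keep_fraction=0.05, rng=None):
    """Smooth random phase in the image plane -> one inverse FFT -> keep only
    the strongest hologram pixels (a sparse amplitude-and-phase layout)."""
    phase = 2 * np.pi * smooth_noise(target_amp.shape, rng=rng)
    field = target_amp * np.exp(1j * phase)           # image-plane complex field
    holo = np.fft.ifft2(np.fft.ifftshift(field))      # Fourier-plane hologram
    k = max(1, int(keep_fraction * holo.size))
    thresh = np.sort(np.abs(holo).ravel())[-k]
    return np.where(np.abs(holo) >= thresh, holo, 0)

if __name__ == "__main__":
    target = np.zeros((256, 256))
    target[96:160, 96:160] = 1.0                      # toy target image
    holo = sparse_fourier_hologram(target, keep_fraction=0.05)
    recon = np.abs(np.fft.fftshift(np.fft.fft2(holo)))
    recon /= recon.max()
    mse = np.mean((recon - target) ** 2)
    print(f"reconstruction PSNR ~ {10 * np.log10(1.0 / mse):.1f} dB")
```

Intuitively, a smooth image-plane phase keeps the hologram energy more concentrated than a fully random phase would, which is what makes aggressive sparsification viable; a real pipeline would replace the toy noise and thresholding rule above with the paper's Perlin-noise phase and layout computation.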
Invited Paper: Edge-based Provisioning of Holographic Content for Contextual and Personalized Augmented Reality
Mobile augmented reality (AR) has been attracting considerable attention from industry and academia due to its potential to provide vibrant immersive experiences that seamlessly blend the physical and virtual worlds. In this paper, we focus on creating contextual and personalized AR experiences via edge-based, on-demand provisioning of the holographic content most appropriate for the current conditions and/or best matching the user's interests. We present the edge-based hologram provisioning and pre-provisioning frameworks we developed for Google ARCore and Magic Leap One AR experiences, and describe open challenges and research directions associated with this approach to holographic content storage and transfer. The code we have developed for this paper is available online.
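As an illustration of the kind of logic such a framework might run at the edge, the sketch below ranks candidate hologram assets against a user's context and interests and selects which assets to pre-provision into an edge cache. The asset fields, scoring weights, and cache size are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: context- and interest-driven selection of hologram
# assets for edge pre-provisioning. All fields and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class HologramAsset:
    name: str
    tags: set          # descriptive tags, e.g. {"museum", "art"}
    size_mb: float     # transfer cost from cloud storage to the edge

@dataclass
class UserContext:
    location_tags: set  # tags describing where the user is
    interest_tags: set  # tags describing what the user likes

def score(asset: HologramAsset, ctx: UserContext) -> float:
    """Higher is better: contextual fit plus personalization, discounted by size."""
    contextual = len(asset.tags & ctx.location_tags)
    personal = len(asset.tags & ctx.interest_tags)
    return 2.0 * contextual + 1.0 * personal - 0.01 * asset.size_mb

def pre_provision(catalog, ctx: UserContext, cache_slots: int = 3):
    """Choose the assets to push to the edge cache before the user requests them."""
    ranked = sorted(catalog, key=lambda a: score(a, ctx), reverse=True)
    return ranked[:cache_slots]

catalog = [
    HologramAsset("dinosaur_skeleton", {"museum", "exhibit"}, 120.0),
    HologramAsset("city_landmark", {"outdoor", "tourism"}, 80.0),
    HologramAsset("artwork_guide", {"museum", "art"}, 40.0),
]
ctx = UserContext(location_tags={"museum"}, interest_tags={"art"})
print([a.name for a in pre_provision(catalog, ctx, cache_slots=2)])
```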
- PAR ID: 10192318
- Date Published:
- Journal Name: IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
- Page Range / eLocation ID: 1 to 6
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Mobile Augmented Reality (AR), which overlays digital content on the real-world scenes surrounding a user, is bringing immersive interactive experiences where the real and virtual worlds are tightly coupled. To enable seamless and precise AR experiences, an image recognition system that can accurately recognize the object in the camera view with low system latency is required. However, due to the pervasiveness and severity of image distortions, an effective and robust image recognition solution for mobile AR is still elusive. In this paper, we present CollabAR, an edge-assisted system that provides distortion-tolerant image recognition for mobile AR with imperceptible system latency. CollabAR incorporates both distortion-tolerant and collaborative image recognition modules in its design. The former enables distortion-adaptive image recognition to improve the robustness against image distortions, while the latter exploits the spatial-temporal correlation among mobile AR users to improve recognition accuracy. We implement CollabAR on four different commodity devices and evaluate its performance on two multi-view image datasets. Our evaluation demonstrates that CollabAR achieves over 96% recognition accuracy for images with severe distortions, while reducing the end-to-end system latency to as low as 17.8 ms for commodity mobile devices.
- Mobile Augmented Reality (AR), which overlays digital content on the real-world scenes surrounding a user, is bringing immersive interactive experiences where the real and virtual worlds are tightly coupled. To enable seamless and precise AR experiences, an image recognition system that can accurately recognize the object in the camera view with low system latency is required. However, due to the pervasiveness and severity of image distortions, an effective and robust image recognition solution for "in the wild" mobile AR is still elusive. In this article, we present CollabAR, an edge-assisted system that provides distortion-tolerant image recognition for mobile AR with imperceptible system latency. CollabAR incorporates both distortion-tolerant and collaborative image recognition modules in its design. The former enables distortion-adaptive image recognition to improve the robustness against image distortions, while the latter exploits the spatial-temporal correlation among mobile AR users to improve recognition accuracy. Moreover, as it is difficult to collect a large-scale image distortion dataset, we propose a Cycle-Consistent Generative Adversarial Network-based data augmentation method to synthesize realistic image distortions. Our evaluation demonstrates that CollabAR achieves over 85% recognition accuracy for "in the wild" images with severe distortions, while reducing the end-to-end system latency to as low as 18.2 ms. A minimal, hypothetical sketch of these two modules is shown after this list.
- Puig Puig, Anna (Ed.) Motivated by the potential of holographic augmented reality (AR) to offer an immersive 3D appreciation of morphology and anatomy, the purpose of this work is to develop and assess an interface for image-based planning of prostate interventions with a head-mounted display (HMD). The computational system is a data and command pipeline linking a magnetic resonance imaging (MRI) scanner/data and the operator, and includes modules dedicated to image processing and segmentation, structure rendering, trajectory planning, and spatial co-registration. The interface was developed with the Unity3D Engine (C#) and deployed and tested on a HoloLens HMD. For ergonomics in the surgical suite, the system was endowed with hands-free interactive manipulation of images and the holographic scene via hand gestures and voice commands. The system was tested in silico using MRI and ultrasound datasets of prostate phantoms. The holographic AR scene rendered by the HoloLens HMD was subjectively found superior to desktop-based volume or 3D rendering with regard to structure detection, appreciation of spatial relationships, planning of access paths, and manual co-registration of MRI and ultrasound. By inspecting the virtual trajectory superimposed on rendered structures and MR images, the operator can observe collisions of the needle path with vital structures (e.g., the urethra) and adjust accordingly. Holographic AR interfacing with a wireless HMD endowed with hands-free gesture and voice control is a promising technology; studies are needed to systematically assess the clinical merit of such systems and the functionalities they require.
- Dawood, Nashwan; Rahimian, Farzad P.; Seyedzadeh, Saleh; Sheikhkhoshkar, Moslem (Eds.) The growing adoption of sensing technologies in the construction industry has created a need for construction engineering graduates equipped with the skills to deploy these technologies. One obstacle to equipping students with these skills is the limited opportunity for hands-on learning experiences on construction sites. Inspired by the opportunities offered by mixed reality, this paper presents the development of a holographic learning environment that affords learners an experiential opportunity to acquire competencies for implementing sensing systems on construction projects. The interactive holographic learning environment is built upon the notions of competence-based and constructivist learning, and its learning contents are driven by the characteristics of technical competencies identified from an online survey and a content analysis of industry case studies. The paper presents a competency characteristics model depicting the key sensing technologies, applications, and resources needed to inform the design of the holographic learning environment. A demonstrative scenario, in which a virtual laser scanner is used to measure the volume of stockpiles, showcases the potential of the learning environment. A taxonomic model of the operational characteristics of the virtual laser scanner represented within the holographic learning environment is also presented. This paper contributes to the body of knowledge by advancing immersive experiential learning discourses previously confined by technology, and it opens a new avenue for both researchers and practitioners to further investigate the opportunities offered by mixed reality for future workforce development.
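Referring back to the CollabAR entries above, the sketch below illustrates the two ideas they describe, distortion-adaptive routing of a frame to a recognizer and collaborative fusion of per-user results, in plain Python. The blur heuristic, threshold, stand-in recognizers, and fusion rule are all hypothetical and are not CollabAR's actual modules.

```python
# Illustrative sketch only: distortion-adaptive recognition plus collaborative
# fusion across nearby AR users. Nothing here is CollabAR's real implementation.
import numpy as np

def recognize_clean(gray):          # stand-in for a model trained on clean images
    scores = np.full(10, 0.05); scores[3] = 0.55
    return scores

def recognize_blur_tolerant(gray):  # stand-in for a blur-augmented model
    scores = np.full(10, 0.06); scores[3] = 0.46
    return scores

def estimate_blur(gray: np.ndarray) -> float:
    """Crude sharpness proxy: variance of image gradients (low = likely blurred)."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.var(gx) + np.var(gy))

def pick_recognizer(gray: np.ndarray, blur_threshold: float = 50.0):
    """Route the frame to a distortion-specific model based on the detected distortion."""
    return recognize_blur_tolerant if estimate_blur(gray) < blur_threshold else recognize_clean

def collaborative_fusion(score_vectors):
    """Average per-class scores reported by spatially/temporally correlated users."""
    return int(np.argmax(np.mean(np.stack(score_vectors), axis=0)))

# Usage: each device's frame is recognized with a distortion-matched model at the
# edge, then the per-user score vectors are fused into a single label.
frames = [np.random.rand(64, 64) for _ in range(3)]
scores = [pick_recognizer(f)(f) for f in frames]
print("fused label:", collaborative_fusion(scores))
```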

