

Title: Invited Paper: Edge-based Provisioning of Holographic Content for Contextual and Personalized Augmented Reality
Mobile augmented reality (AR) has been attracting considerable attention from industry and academia due to its potential to provide vibrant immersive experiences that seamlessly blend the physical and virtual worlds. In this paper, we focus on creating contextual and personalized AR experiences via edge-based on-demand provisioning of the holographic content best suited to current conditions and/or user interests. We present edge-based hologram provisioning and pre-provisioning frameworks we developed for Google ARCore and Magic Leap One AR experiences, and describe open challenges and research directions associated with this approach to holographic content storage and transfer. The code we have developed for this paper is available online.
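The pre-provisioning idea described above can be illustrated with a minimal sketch: an edge node ranks the holograms in its catalog by how well their tags match a user's interest profile, then greedily pre-loads the best matches that fit a cache budget. All names here (`Hologram`, `preprovision`, the tag-overlap score) are hypothetical illustrations, not the paper's actual API.

```python
from dataclasses import dataclass

@dataclass
class Hologram:
    name: str
    size_mb: float
    tags: frozenset  # e.g. frozenset({"history", "outdoor"})

def score(holo, interests):
    # Relevance = number of tags shared with the user's interest profile.
    return len(holo.tags & interests)

def preprovision(catalog, interests, cache_budget_mb):
    """Greedily pre-load the most relevant holograms that fit the edge cache."""
    ranked = sorted(catalog, key=lambda h: score(h, interests), reverse=True)
    cached, used = [], 0.0
    for h in ranked:
        if score(h, interests) > 0 and used + h.size_mb <= cache_budget_mb:
            cached.append(h.name)
            used += h.size_mb
    return cached
```

A real system would also weigh download latency and the user's predicted location, but the greedy budgeted ranking captures the core trade-off between relevance and edge storage.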
Award ID(s):
1908051 1903136
NSF-PAR ID:
10192318
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops)
Page Range / eLocation ID:
1 to 6
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Mobile Augmented Reality (AR), which overlays digital content on the real-world scenes surrounding a user, is bringing immersive interactive experiences where the real and virtual worlds are tightly coupled. To enable seamless and precise AR experiences, an image recognition system that can accurately recognize the object in the camera view with low system latency is required. However, due to the pervasiveness and severity of image distortions, an effective and robust image recognition solution for mobile AR is still elusive. In this paper, we present CollabAR, an edge-assisted system that provides distortion-tolerant image recognition for mobile AR with imperceptible system latency. CollabAR incorporates both distortion-tolerant and collaborative image recognition modules in its design. The former enables distortion-adaptive image recognition to improve robustness against image distortions, while the latter exploits the spatial-temporal correlation among mobile AR users to improve recognition accuracy. We implement CollabAR on four different commodity devices, and evaluate its performance on two multi-view image datasets. Our evaluation demonstrates that CollabAR achieves over 96% recognition accuracy for images with severe distortions, while reducing the end-to-end system latency to as low as 17.8 ms for commodity mobile devices.
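The collaborative module's use of spatial-temporal correlation can be sketched as a weighted fusion of class probabilities: predictions from nearby, recent peers count more than those from distant or stale ones. This is a simplified stand-in, not CollabAR's actual fusion rule; the function name, report format, and exponential decay weighting are all assumptions.

```python
import math

def fuse_predictions(local_probs, peer_reports, decay=1.0):
    """Combine a device's own class probabilities with peers' predictions,
    down-weighting peers that are far away in space or stale in time.

    local_probs: {label: prob}
    peer_reports: [({label: prob}, distance_m, age_s), ...]
    """
    fused = dict(local_probs)
    for probs, dist, age in peer_reports:
        # Simple joint decay over spatial distance and report age.
        w = math.exp(-decay * (dist + age))
        for label, p in probs.items():
            fused[label] = fused.get(label, 0.0) + w * p
    total = sum(fused.values())
    return {label: p / total for label, p in fused.items()}
```

Intuitively, if a peer two meters away just recognized the same object with high confidence, its vote can rescue a locally blurred or occluded view.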
  2. Mobile Augmented Reality (AR), which overlays digital content on the real-world scenes surrounding a user, is bringing immersive interactive experiences where the real and virtual worlds are tightly coupled. To enable seamless and precise AR experiences, an image recognition system that can accurately recognize the object in the camera view with low system latency is required. However, due to the pervasiveness and severity of image distortions, an effective and robust image recognition solution for “in the wild” mobile AR is still elusive. In this article, we present CollabAR, an edge-assisted system that provides distortion-tolerant image recognition for mobile AR with imperceptible system latency. CollabAR incorporates both distortion-tolerant and collaborative image recognition modules in its design. The former enables distortion-adaptive image recognition to improve the robustness against image distortions, while the latter exploits the spatial-temporal correlation among mobile AR users to improve recognition accuracy. Moreover, as it is difficult to collect a large-scale image distortion dataset, we propose a Cycle-Consistent Generative Adversarial Network-based data augmentation method to synthesize realistic image distortion. Our evaluation demonstrates that CollabAR achieves over 85% recognition accuracy for “in the wild” images with severe distortions, while reducing the end-to-end system latency to as low as 18.2 ms. 
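The article's data-augmentation step synthesizes realistic distortions with a Cycle-Consistent GAN; as a much simpler classical stand-in, the sketch below applies two common distortion types (Gaussian pixel noise and a mild box blur) to a grayscale image represented as nested lists. These helper names and parameters are illustrative assumptions, not the paper's method.

```python
import random

def add_gaussian_noise(image, sigma=10.0, seed=None):
    """Add clipped Gaussian pixel noise to a grayscale image (rows of 0-255 ints)."""
    rng = random.Random(seed)
    return [[min(255, max(0, int(round(px + rng.gauss(0, sigma)))))
             for px in row] for row in image]

def box_blur(image):
    """3x3 mean filter approximating mild defocus blur (edge pixels clamped)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) // 9
    return out
```

Training a recognizer on such synthetically distorted copies of clean images is the basic idea the GAN-based method refines with learned, more realistic distortions.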
  3. By allowing people to manipulate digital content placed in the real world, Augmented Reality (AR) provides immersive and enriched experiences in a variety of domains. Despite its increasing popularity, providing a seamless AR experience under bandwidth fluctuations is still a challenge, since delivering these experiences at photorealistic quality with minimal latency requires high bandwidth. Streaming approaches have already been proposed to solve this problem, but they require accurate prediction of the user's Field-Of-View in order to stream only those regions of the scene that the user is most likely to watch. To solve this prediction problem, we study in this paper the watching behavior of users exploring different types of AR scenes via mobile devices. To this end, we introduce the ACE Dataset, the first dataset collecting movement data of 50 users exploring 5 different AR scenes. We also propose a four-feature taxonomy for AR scene design, which allows different types of AR scenes to be categorized methodically and supports further research in this domain. Motivated by the ACE Dataset analysis results, we develop a novel user visual attention prediction algorithm that jointly utilizes information about users' historical movements and digital objects' positions in the AR scene. Evaluation on the ACE Dataset shows that the proposed approach outperforms baseline approaches under prediction horizons of variable lengths, and can therefore benefit the AR ecosystem in terms of bandwidth reduction and improved quality of users' experience.
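The idea of jointly using movement history and object positions can be sketched as follows: extrapolate the viewpoint from recent motion, then pull the prediction toward the nearest AR object, since placed content attracts attention. The function, the 2D simplification, and the blend weight `alpha` are all illustrative assumptions, not the paper's algorithm.

```python
def predict_focus(history, objects, alpha=0.5):
    """Predict the user's next point of attention in 2D.

    history: recent viewpoint positions [(x, y), ...], oldest first
    objects: AR object anchor positions [(x, y), ...]
    alpha:   blend between pure motion extrapolation (0) and nearest object (1)
    """
    (x0, y0), (x1, y1) = history[-2], history[-1]
    # Constant-velocity extrapolation from the last two samples.
    ex, ey = 2 * x1 - x0, 2 * y1 - y0
    # Snap toward the AR object closest to the extrapolated point.
    ox, oy = min(objects, key=lambda o: (o[0] - ex) ** 2 + (o[1] - ey) ** 2)
    return ((1 - alpha) * ex + alpha * ox, (1 - alpha) * ey + alpha * oy)
```

A streaming system could then prioritize scene regions around the predicted focus point, saving bandwidth on regions unlikely to be watched.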
  4. Taking part in creating location-based augmented reality (LBAR) experiences that focus on communication, art and design could serve as an entry point for art-oriented girls and young women towards career pathways in computer science and information communication technology. This conceptual paper presents our theory-based approach and subsequent application, as well as lessons learned informed by team discussions and reflections. We built an LBAR program entitled AR Girls on four foundational principles: stealth science (embedding science in familiar appealing experiences), place-based education (situating learning in one’s own community), non-hierarchical design (collaborations where both adults and youth generate content), and learning through design (engaging in design, not just play). To translate these principles into practice, we centered the program around the theme of art by forming partnerships with small community art organizations and positioning LBAR as an art-based communication medium. We found that LBAR lends itself to an interdisciplinary approach that blends technology, art, science and communication. We believe our approach helped girls make connections to their existing interests and build soft skills such as leadership and interpersonal communication as they designed local environmentally-focused LBAR walking tours. Our “use-modify-create” approach provided first-hand experiences with the AR software early on, and thus supported the girls and their art educators in designing and showcasing their walking tours. Unfortunately, the four foundational principles introduced considerable complexity to AR Girls, which impacted recruitment and retention, and at times overwhelmed the art educators who co-led the program. 
To position AR Girls for long-term success, we simplified the program approach and implementation, including switching to a more user-friendly AR software; reducing the logistical challenges of location-based design and play; narrowing the topic addressed by the girls' designs; and making the involvement of community partners optional. Overall, our initial work was instrumental in understanding how to translate theoretical considerations for learning in out-of-school settings into an LBAR program aimed at achieving multiple complementary outcomes for participating girls. Ultimately, we achieved better scalability by simplifying AR Girls both conceptually and practically. The lessons learned from AR Girls can inform others using LBAR for education and youth development programming.
  5. Dawood, Nashwan ; Rahimian, Farzad P. ; Seyedzadeh, Saleh ; Sheikhkhoshkar, Moslem (Ed.)
    The growth in the adoption of sensing technologies in the construction industry has triggered the need for graduating construction engineering students equipped with the necessary skills for deploying the technologies. One obstacle to equipping students with these skills is the limited opportunity for hands-on learning experiences on construction sites. Inspired by the opportunities offered by mixed reality, this paper presents the development of a holographic learning environment that can afford learners an experiential opportunity to acquire competencies for implementing sensing systems on construction projects. The interactive holographic learning environment is built upon the notions of competence-based and constructivist learning. The learning contents of the holographic learning environment are driven by characteristics of technical competencies identified from the results of an online survey and content analysis of industry case studies. This paper presents a competency characteristics model depicting the key sensing technologies, applications, and resources needed to facilitate the design of the holographic learning environment. A demonstrative scenario involving a virtual laser scanner for measuring the volume of stockpiles is used to showcase the potential of the learning environment. A taxonomic model of the operational characteristics of the virtual laser scanner represented within the holographic learning environment is also presented. This paper contributes to the body of knowledge by advancing immersive experiential learning discourses previously confined by technology. It opens a new avenue for both researchers and practitioners to further investigate the opportunities offered by mixed reality for future workforce development.
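The stockpile-volume scenario reduces to a simple computation once the scanner's point cloud is rasterized into a height map: each grid cell contributes its measured height times its footprint area. The sketch below illustrates that reduction under an assumed gridded representation; the function name and units are hypothetical.

```python
def stockpile_volume(height_grid, cell_area_m2=1.0):
    """Estimate stockpile volume (m^3) from a rasterized height map.

    height_grid: rows of per-cell heights in meters, as a laser scanner's
    point cloud might be binned onto a regular ground grid.
    cell_area_m2: footprint area of one grid cell.
    """
    return sum(h * cell_area_m2 for row in height_grid for h in row)
```

In the holographic environment, a learner scanning a virtual stockpile would see how grid resolution (cell size) trades off scan time against volume accuracy.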