Title: ScalAR: Authoring Semantically Adaptive Augmented Reality Experiences in Virtual Reality
Augmented Reality (AR) experiences tightly associate virtual contents with environmental entities. However, the dissimilarity of different environments limits the adaptive AR content behaviors under large-scale deployment. We propose ScalAR, an integrated workflow enabling designers to author semantically adaptive AR experiences in Virtual Reality (VR). First, potential AR consumers collect local scenes with a semantic understanding technique. ScalAR then synthesizes numerous similar scenes. In VR, a designer authors the AR contents’ semantic associations and validates the design while being immersed in the provided scenes. We adopt a decision-tree-based algorithm to fit the designer’s demonstrations as a semantic adaptation model to deploy the authored AR experience in a physical scene. We further showcase two application scenarios authored by ScalAR and conduct a two-session user study where the quantitative results prove the accuracy of the AR content rendering and the qualitative results show the usability of ScalAR.
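The abstract describes fitting a designer's demonstrations with a decision-tree-based algorithm so that AR placement generalizes across scenes by their semantic features. The sketch below is a minimal illustration of that idea using a hand-rolled ID3-style tree over categorical scene features; all names (feature keys, placement labels) are invented for illustration and are not ScalAR's actual data model or API.

```python
# Illustrative sketch: learn a semantic adaptation model from designer
# demonstrations with a tiny ID3-style decision tree. Feature names and
# placement labels are hypothetical, not from the paper.
from collections import Counter
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def fit(rows, labels, features):
    # Stop when labels are pure or no features remain: majority label.
    if len(set(labels)) == 1 or not features:
        return Counter(labels).most_common(1)[0][0]
    def gain(f):  # information gain of splitting on semantic feature f
        rem = 0.0
        for v in set(r[f] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[f] == v]
            rem += len(sub) / len(labels) * entropy(sub)
        return entropy(labels) - rem
    best = max(features, key=gain)
    node = {"feature": best, "branches": {}}
    for v in set(r[best] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[best] == v]
        node["branches"][v] = fit([rows[i] for i in idx],
                                  [labels[i] for i in idx],
                                  [f for f in features if f != best])
    return node

def predict(node, row):
    while isinstance(node, dict):
        node = node["branches"].get(row[node["feature"]])
    return node

# Demonstrations: semantic scene features -> where the AR content goes.
demos = [
    {"surface": "table", "near_window": True},
    {"surface": "table", "near_window": False},
    {"surface": "wall",  "near_window": False},
]
placements = ["on_top", "on_top", "anchored_front"]
tree = fit(demos, placements, ["surface", "near_window"])
print(predict(tree, {"surface": "wall", "near_window": True}))  # anchored_front
```

The tree generalizes from three demonstrations to an unseen scene configuration (a wall near a window), which is the essence of deploying an authored experience in a new physical scene.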
Award ID(s):
1839971
NSF-PAR ID:
10396710
Author(s) / Creator(s):
; ; ; ; ;
Date Published:
Journal Name:
CHI Conference on Human Factors in Computing Systems
Page Range / eLocation ID:
1 to 18
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Mobile Augmented Reality (AR), which overlays digital content on the real-world scenes surrounding a user, is bringing immersive interactive experiences where the real and virtual worlds are tightly coupled. To enable seamless and precise AR experiences, an image recognition system that can accurately recognize the object in the camera view with low system latency is required. However, due to the pervasiveness and severity of image distortions, an effective and robust image recognition solution for mobile AR is still elusive. In this paper, we present CollabAR, an edge-assisted system that provides distortion-tolerant image recognition for mobile AR with imperceptible system latency. CollabAR incorporates both distortion-tolerant and collaborative image recognition modules in its design. The former enables distortion-adaptive image recognition to improve the robustness against image distortions, while the latter exploits the spatial-temporal correlation among mobile AR users to improve recognition accuracy. We implement CollabAR on four different commodity devices, and evaluate its performance on two multi-view image datasets. Our evaluation demonstrates that CollabAR achieves over 96% recognition accuracy for images with severe distortions, while reducing the end-to-end system latency to as low as 17.8ms for commodity mobile devices.
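CollabAR's collaborative module exploits the spatial-temporal correlation among co-located AR users. One simple way to picture multi-view fusion is a quality-weighted average of per-user recognition scores, so that less-distorted views count more; the sketch below is a hypothetical stand-in, not CollabAR's actual aggregation algorithm.

```python
# Illustrative sketch (not CollabAR's actual code): fuse recognition
# scores from nearby AR users viewing the same object, weighting each
# view by an estimated image-quality score in [0, 1].
def fuse_predictions(views):
    """views: list of (quality, {label: score}) pairs from co-located users."""
    fused = {}
    total = sum(q for q, _ in views)
    for quality, scores in views:
        for label, score in scores.items():
            fused[label] = fused.get(label, 0.0) + (quality / total) * score
    return max(fused, key=fused.get)

views = [
    (0.9, {"poster": 0.7, "painting": 0.3}),  # sharp, well-lit view
    (0.2, {"poster": 0.1, "painting": 0.9}),  # motion-blurred view
]
print(fuse_predictions(views))  # poster
```

Here the blurred view alone would misclassify the object, but weighting by quality lets the sharp collaborator's view dominate, mirroring the intuition behind collaborative recognition.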
  2. Mobile Augmented Reality (AR), which overlays digital content on the real-world scenes surrounding a user, is bringing immersive interactive experiences where the real and virtual worlds are tightly coupled. To enable seamless and precise AR experiences, an image recognition system that can accurately recognize the object in the camera view with low system latency is required. However, due to the pervasiveness and severity of image distortions, an effective and robust image recognition solution for “in the wild” mobile AR is still elusive. In this article, we present CollabAR, an edge-assisted system that provides distortion-tolerant image recognition for mobile AR with imperceptible system latency. CollabAR incorporates both distortion-tolerant and collaborative image recognition modules in its design. The former enables distortion-adaptive image recognition to improve the robustness against image distortions, while the latter exploits the spatial-temporal correlation among mobile AR users to improve recognition accuracy. Moreover, as it is difficult to collect a large-scale image distortion dataset, we propose a Cycle-Consistent Generative Adversarial Network-based data augmentation method to synthesize realistic image distortion. Our evaluation demonstrates that CollabAR achieves over 85% recognition accuracy for “in the wild” images with severe distortions, while reducing the end-to-end system latency to as low as 18.2 ms. 
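This version of CollabAR synthesizes realistic image distortions with a Cycle-Consistent GAN because large distortion datasets are hard to collect. A full CycleGAN is beyond a short sketch; the code below instead shows the simpler, classical baseline that such learned synthesis improves on, namely injecting parametric noise and blur into clean images to build distorted training variants. Everything here (pure-Python grayscale images as 2D lists, function names) is illustrative only.

```python
# Illustrative baseline for distortion-data augmentation: parametric
# Gaussian noise plus a 3x3 box blur. This is NOT the paper's CycleGAN
# method, just the classical alternative it is contrasted with.
import random

def add_gaussian_noise(image, sigma=10.0, seed=None):
    rng = random.Random(seed)
    return [[min(255.0, max(0.0, px + rng.gauss(0, sigma))) for px in row]
            for row in image]

def box_blur(image):
    h, w = len(image), len(image[0])
    def at(y, x):  # clamp coordinates at the border
        return image[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]
    return [[sum(at(y + dy, x + dx)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
             for x in range(w)] for y in range(h)]

clean = [[100.0] * 4 for _ in range(4)]  # a flat 4x4 grayscale patch
augmented = [box_blur(add_gaussian_noise(clean, sigma=5.0, seed=i))
             for i in range(3)]  # three distorted training variants
```

A learned generator replaces these hand-tuned distortion parameters with distortions matched to what mobile cameras actually produce, which is why the GAN-based approach yields more realistic training data.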
  3. Immersive Learning Environments (ILEs) developed in Virtual and Augmented Reality (VR/AR) are a novel professional training platform. An ILE can facilitate an Adaptive Learning System (ALS), which has proven beneficial to the learning process. However, there is no existing AI-ready ILE that facilitates collecting multimedia multimodal data from the environment and users for training AI models, nor allows for the learning contents and complex learning process to be dynamically adapted by an ALS. This paper proposes a novel multimedia system in VR/AR to dynamically build ILEs for a wide range of use-cases, based on a description language for the generalizable ILE structure. It will detail users’ paths and conditions for completing learning activities, and a content adaptation algorithm to update the ILE at runtime. Human and AI systems can customize the environment based on user learning metrics. Results show that this framework is efficient and low-overhead, suggesting a path to simplifying and democratizing the ILE development without introducing bloat. Index Terms: virtual reality, augmented reality, content generation, immersive learning, 3D environments
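The abstract describes a description language that details users' paths and completion conditions, plus a runtime algorithm that adapts content. The sketch below imagines one tiny entry in such a description and the runtime lookup that advances a learner through it; all field names ("activities", "next", "pass"/"fail") are invented for illustration and are not the paper's actual language.

```python
# Hypothetical sketch of an ILE description entry and its runtime
# traversal. The schema is illustrative, not the paper's description
# language.
ile_description = {
    "start": "intro",
    "activities": [
        {"id": "intro",       "next": {"pass": "assemble", "fail": "intro_hints"}},
        {"id": "intro_hints", "next": {"pass": "assemble", "fail": "intro_hints"}},
        {"id": "assemble",    "next": {"pass": "done",     "fail": "intro_hints"}},
    ],
}

def next_activity(description, current_id, outcome):
    """Pick the learner's next activity from the description at runtime."""
    for act in description["activities"]:
        if act["id"] == current_id:
            return act["next"][outcome]
    raise KeyError(current_id)

print(next_activity(ile_description, "intro", "fail"))  # intro_hints
```

Because the paths live in data rather than code, either a human author or an AI-driven ALS can rewrite the `next` edges at runtime to adapt the learning process, which is the property the paper's design aims for.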
  4. This literature review examines the existing research into cybersickness reduction with regards to head mounted display use. Cybersickness refers to a collection of negative symptoms sometimes experienced as the result of being immersed in a virtual environment, such as nausea, dizziness, or eye strain. These symptoms can prevent individuals from utilizing virtual reality (VR) technologies, so discovering new methods of reducing them is critical. Our objective in this literature review is to provide a better picture of what cybersickness reduction techniques exist, the quantity of research demonstrating their effectiveness, and the virtual scenes testing has taken place in. This will help to direct researchers towards promising avenues, and illuminate gaps in the literature. Following the preferred reporting items for systematic reviews and meta-analyses statement, we obtained a batch of 1,055 papers through the use of software aids. We selected 88 papers that examine potential cybersickness reduction approaches. Our acceptance criteria required that papers examined malleable conditions that could be conceivably modified for everyday use, examined techniques in conjunction with head mounted displays, and compared cybersickness levels between two or more user conditions. These papers were sorted into categories based on their general approach to combating cybersickness, and labeled based on the presence of statistically significant results, the use of virtual vehicles, the level of visual realism, and the virtual scene contents used in evaluation of their effectiveness. In doing this we have created a snapshot of the literature to date so that researchers may better understand what approaches are being researched, and the types of virtual experiences used in their evaluation. Keywords: virtual reality, cybersickness, simulator sickness, visually induced motion sickness reduction, systematic review, head mounted display.
  5.
    Modern manufacturing processes are in a state of flux as they adapt to increasing demand for flexible and self-configuring production. This poses challenges for training workers to rapidly master new machine operations and processes, i.e. machine tasks. Conventional in-person training is effective but requires experts' time and effort for each worker trained, and is not scalable. Recorded tutorials, such as video-based or augmented reality (AR) tutorials, permit more efficient scaling. However, unlike in-person tutoring, existing recorded tutorials lack the ability to adapt to workers’ diverse experiences and learning behaviors. We present AdapTutAR, an adaptive task tutoring system that enables experts to record machine task tutorials via embodied demonstration and train learners with different AR tutoring contents adapting to each user’s characteristics. The adaptation is achieved by continually monitoring learners’ tutorial-following status and adjusting the tutoring content on-the-fly and in-situ. The results of our user study evaluation demonstrate that our adaptive system is more effective and preferable than the non-adaptive one.
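AdapTutAR's adaptation rests on continually monitoring the learner's tutorial-following status and adjusting tutoring content on-the-fly. A minimal way to picture that loop is a threshold policy over recent step-following accuracy; the sketch below is a hypothetical illustration, with invented level names and thresholds, not the paper's actual policy.

```python
# Illustrative sketch (invented names/thresholds): choose the tutoring
# detail level from the learner's recent step-following accuracy.
def tutoring_level(recent_outcomes, low=0.5, high=0.85):
    """recent_outcomes: 1 for a correctly followed step, 0 otherwise."""
    rate = sum(recent_outcomes) / len(recent_outcomes)
    if rate < low:
        return "detailed_embodied_demo"  # replay the expert demonstration
    if rate < high:
        return "highlight_hints"         # in-situ visual hints only
    return "minimal_text"                # terse reminders for fluent users

print(tutoring_level([1, 0, 0, 1, 0]))  # detailed_embodied_demo
```

Re-evaluating such a policy after every step is what makes the adjustment "on-the-fly and in-situ": a struggling learner immediately gets richer guidance, while a fluent one is not slowed down by detail they no longer need.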