Title: ProcessAR: An augmented reality-based tool to create in-situ procedural 2D/3D AR Instructions
Augmented reality (AR) is an efficient form of delivering spatial information and has great potential for training workers. However, AR is still not widely used for such scenarios because of the technical skills and expertise required to create interactive AR content. We developed ProcessAR, an AR-based system for developing 2D/3D content that captures subject matter experts' (SMEs) environment-object interactions in situ. The design space for ProcessAR was identified through formative interviews with AR programming experts and SMEs, alongside a comparative design study with SMEs and novice users. To enable smooth workflows, ProcessAR locates and identifies tools and objects in the workspace through computer vision when the author looks at them. We also explored features such as embedding 2D videos with detected objects and user-adaptive triggers. A final user evaluation comparing ProcessAR with a baseline AR authoring environment showed that, per our qualitative questionnaire, users preferred ProcessAR.
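The abstract does not detail the detection pipeline, but the gaze-triggered object identification it describes can be pictured with a minimal sketch. Everything below is an assumption for illustration, not ProcessAR's actual implementation: an off-the-shelf detector producing labeled boxes and a headset reporting a 2D gaze point in camera coordinates; all names are hypothetical.

# Illustrative sketch only: assumes per-frame detections from an
# off-the-shelf object detector and a gaze point from the headset.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # e.g., "screwdriver"
    box: tuple      # (x_min, y_min, x_max, y_max) in pixels
    score: float    # detector confidence

def object_under_gaze(detections, gaze_xy, min_score=0.5):
    """Return the highest-confidence detection whose box contains the gaze point."""
    gx, gy = gaze_xy
    hits = [d for d in detections
            if d.score >= min_score
            and d.box[0] <= gx <= d.box[2]
            and d.box[1] <= gy <= d.box[3]]
    return max(hits, key=lambda d: d.score) if hits else None

# Usage: feed one frame's detector output and the current gaze sample.
frame_detections = [Detection("screwdriver", (120, 80, 260, 340), 0.91),
                    Detection("wrench", (400, 200, 520, 380), 0.76)]
looked_at = object_under_gaze(frame_detections, gaze_xy=(200, 150))
print(looked_at.label if looked_at else "nothing identified")  # screwdriver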
Award ID(s):
1839971
NSF-PAR ID:
10297576
Author(s) / Creator(s):
Date Published:
Journal Name:
Designing Interactive Systems Conference 2021 (DIS ’21)
Page Range / eLocation ID:
234 to 249
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Augmented reality (AR) interfaces increasingly use artificial intelligence systems to tailor content and experiences to the user. We explore the effects of one such system, a recommender for online shopping, which lets customers view personalized product recommendations in the physical spaces where they might be used. We describe the results of an exploratory study in which recommendation quality was varied across three user interface types. Our results highlight potential differences in how users perceive recommended objects in an AR environment. Specifically, users rate product recommendations significantly higher in AR and in a 3D browser interface, and show significantly greater trust in the recommender system, compared to a web interface with 2D product images. Through semi-structured interviews, we gathered participant feedback suggesting that AR interfaces perform better because they let users view products within the physical context where they will be used.
  2. Augmented/Virtual reality and video-based media play a vital role in the digital learning revolution for training novices in spatial tasks. However, creating content for these different media requires expertise in several fields. We present EditAR, a unified authoring and editing environment for creating AR, VR, and video content from a single demonstration. EditAR captures the user's interaction within an environment and creates a digital twin, enabling users without programming backgrounds to develop content. We conducted formative interviews with both subject and media experts to design the system; the prototype was then developed and reviewed by experts. We also performed a user study comparing traditional video creation with 2D video creation from 3D recordings via a 3D editor that uses freehand interaction for in-headset editing. Users took one-fifth the time to record instructions, preferred EditAR, and gave it significantly higher usability scores.
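As an illustration of the "single demonstration" idea, the sketch below logs a demonstration as timestamped object poses from which AR, VR, or video views could later be rendered. This is a hypothetical format invented for illustration, not EditAR's actual recording pipeline.

# Illustrative sketch only: a demonstration logged as timestamped
# 6-DoF poses of tracked objects (a minimal "digital twin" record).
import json, time

class DemonstrationRecorder:
    def __init__(self):
        self.frames = []

    def log(self, object_id, position, rotation_quat):
        """Append one tracked-object pose sample with a wall-clock timestamp."""
        self.frames.append({
            "t": time.time(),
            "object": object_id,
            "position": list(position),        # (x, y, z) in meters
            "rotation": list(rotation_quat),   # (x, y, z, w) quaternion
        })

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.frames, f, indent=2)

rec = DemonstrationRecorder()
rec.log("drill", (0.4, 1.1, -0.2), (0, 0, 0, 1))
rec.save("demo_take1.json")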
  3. By allowing people to manipulate digital content placed in the real world, Augmented Reality (AR) provides immersive and enriched experiences in a variety of domains. Despite its increasing popularity, providing a seamless AR experience under bandwidth fluctuations is still a challenge, since delivering these experiences at photorealistic quality with minimal latency requires high bandwidth. Streaming approaches have been proposed to solve this problem, but they require accurate prediction of the user's field of view so that only the regions of the scene the user is most likely to watch are streamed. To address this prediction problem, we study the viewing behavior of users exploring different types of AR scenes via mobile devices. To this end, we introduce the ACE Dataset, the first dataset collecting movement data of 50 users exploring 5 different AR scenes. We also propose a four-feature taxonomy for AR scene design, which allows different types of AR scenes to be categorized methodically and supports further research in this domain. Motivated by the ACE Dataset analysis, we develop a novel user visual attention prediction algorithm that jointly uses users' historical movements and the positions of digital objects in the AR scene. Evaluation on the ACE Dataset shows that the proposed approach outperforms baselines under prediction horizons of variable lengths and can therefore benefit the AR ecosystem through reduced bandwidth and improved quality of user experience.
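The paper's attention-prediction model is not reproduced here. As a hedged illustration of jointly using historical movement and digital object positions, the toy baseline below blends linear extrapolation of recent gaze with a pull toward the nearest virtual object; the function, weighting, and data shapes are all assumptions.

# Illustrative sketch only: a toy attention predictor, not the
# algorithm from the paper.
import numpy as np

def predict_gaze(history, object_positions, horizon=1.0, alpha=0.7):
    """history: (N, 3) recent gaze points; object_positions: (M, 3)."""
    history = np.asarray(history, dtype=float)
    velocity = history[-1] - history[-2]            # last-step motion
    extrapolated = history[-1] + velocity * horizon
    objs = np.asarray(object_positions, dtype=float)
    nearest = objs[np.argmin(np.linalg.norm(objs - history[-1], axis=1))]
    # Weighted blend: alpha trusts the user's own motion; (1 - alpha)
    # assumes attention drifts toward the closest digital object.
    return alpha * extrapolated + (1 - alpha) * nearest

hist = [[0, 0, 2], [0.1, 0, 2], [0.2, 0, 2]]
objs = [[1.0, 0.2, 2.0], [-2.0, 0.0, 2.0]]
print(predict_gaze(hist, objs, horizon=2.0))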
  4. While tremendous advances in visual and auditory realism have been made for virtual and augmented reality (VR/AR), introducing a plausible sense of physicality into the virtual world remains challenging. Closing the gap between real-world physicality and immersive virtual experience requires a closed interaction loop: applying user-exerted physical forces to the virtual environment and generating haptic sensations back to the users. However, existing VR/AR solutions either completely ignore force inputs from users or rely on obtrusive sensing devices that compromise the user experience. By identifying users' muscle activation patterns while they engage in VR/AR, we design a learning-based neural interface for natural and intuitive force inputs. Specifically, we show that lightweight electromyography sensors, resting non-invasively on users' forearm skin, inform and establish a robust understanding of their complex hand activities. Fueled by a neural-network-based model, our interface can decode finger-wise forces in real time with 3.3% mean error and generalize to new users with little calibration. Through an interactive psychophysical study, we show that human perception of virtual objects' physical properties, such as stiffness, can be significantly enhanced by our interface. We further demonstrate that our interface enables ubiquitous control via finger tapping. Ultimately, we envision our findings pushing research toward more realistic physicality in future VR/AR.
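To make the decoding step concrete, here is a minimal sketch of a regressor from windowed multi-channel forearm EMG to per-finger forces. The channel count, window length, and architecture are assumptions chosen for illustration, not the network described in the paper.

# Illustrative sketch only: a generic EMG-to-force regression model.
import torch
import torch.nn as nn

N_CHANNELS, WINDOW, N_FINGERS = 8, 200, 5   # assumed sensor layout

class EMGForceDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                         # (B, 8, 200) -> (B, 1600)
            nn.Linear(N_CHANNELS * WINDOW, 256),
            nn.ReLU(),
            nn.Linear(256, 64),
            nn.ReLU(),
            nn.Linear(64, N_FINGERS),             # one force value per finger
        )

    def forward(self, emg_window):
        return self.net(emg_window)

model = EMGForceDecoder()
batch = torch.randn(4, N_CHANNELS, WINDOW)   # 4 windows of simulated EMG
forces = model(batch)                        # shape: (4, 5)
print(forces.shape)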
  5. Many researchers and industry professionals believe Augmented Reality (AR) to be the next step in personal computing. However, the idea of an always-on, context-aware AR device presents new and unique challenges to the way users organize multiple streams of information. What does multitasking look like, and when should applications be tied to specific elements in the environment? In this exploratory study, we look at one such element, physical objects, and explore an object-centric approach to multitasking in AR. We developed 3 prototype applications that operate on a subset of objects in a simulated test environment. We performed a pilot study of our multitasking solution with a novice user, a domain expert, and a system expert to develop insights into the future of AR application design.
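The object-centric binding this study explores can be sketched as a simple registry from physical object classes to AR applications, so that focusing on an object surfaces its associated apps. All names below are hypothetical; the prototypes' internals are not described in the abstract.

# Illustrative sketch only: binding AR apps to classes of physical objects.
from collections import defaultdict

class ObjectAppRegistry:
    def __init__(self):
        self._apps = defaultdict(list)   # object class -> app names

    def bind(self, object_class, app_name):
        self._apps[object_class].append(app_name)

    def apps_for(self, object_class):
        """Apps to surface when this object becomes the user's focus."""
        return list(self._apps[object_class])

registry = ObjectAppRegistry()
registry.bind("coffee_machine", "BrewTimer")
registry.bind("coffee_machine", "MaintenanceLog")
registry.bind("monitor", "ScreenMirror")
print(registry.apps_for("coffee_machine"))   # ['BrewTimer', 'MaintenanceLog']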