Title: Designing a Multitasking Interface for Object-Aware AR Applications
Many researchers and industry professionals believe Augmented Reality (AR) to be the next step in personal computing. However, the idea of an always-on, context-aware AR device presents new and unique challenges to the way users organize multiple streams of information. What does multitasking look like, and when should applications be tied to specific elements in the environment? In this exploratory study, we focus on one such element, physical objects, and explore an object-centric approach to multitasking in AR. We developed three prototype applications that operate on a subset of objects in a simulated test environment. We performed a pilot study of our multitasking solution with a novice user, a domain expert, and a system expert to develop insights into the future of AR application design.
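To make the object-centric model concrete, here is a minimal sketch, in Python, of binding applications to classes of physical objects. It is an illustration only, not the paper's implementation; the names (ObjectAppRegistry, DetectedObject, the example app names) are all assumptions.

```python
# Illustrative sketch only: the paper does not publish its implementation.
from dataclasses import dataclass, field


@dataclass
class DetectedObject:
    label: str       # object class reported by the AR device, e.g. "mug"
    position: tuple  # world-space anchor for the object


@dataclass
class ObjectAppRegistry:
    """Maps object classes to the applications that may attach to them."""
    bindings: dict = field(default_factory=dict)

    def register(self, label: str, app_name: str) -> None:
        self.bindings.setdefault(label, []).append(app_name)

    def apps_for(self, obj: DetectedObject) -> list:
        return self.bindings.get(obj.label, [])


registry = ObjectAppRegistry()
registry.register("mug", "hydration-tracker")
registry.register("keyboard", "typing-stats")

scene = [DetectedObject("mug", (0.4, 0.0, 1.2)),
         DetectedObject("plant", (1.0, 0.0, 2.0))]

for obj in scene:
    for app in registry.apps_for(obj):
        # A real system would spawn the app's UI anchored at obj.position.
        print(f"attach {app} to {obj.label} at {obj.position}")
```

The design choice illustrated is that applications register against object classes rather than screen positions, so the system decides where an app surfaces as objects are detected in the environment.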
Award ID(s): 1845587, 1911230
NSF-PAR ID: 10332221
Journal Name: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)
Page Range / eLocation ID: 39 to 40
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Lighting understanding plays an important role in virtual object composition, including in mobile augmented reality (AR) applications. Prior work often focuses on recovering lighting from the physical environment to support photorealistic AR rendering. Because the common workflow is to use a back-facing camera to capture the physical world for overlaying virtual objects, we refer to this usage pattern as back-facing AR. However, existing methods often fall short in supporting emerging front-facing mobile AR applications, e.g., virtual try-on, where a user leverages a front-facing camera to explore the effect of various products (e.g., glasses or hats) of different styles. This lack of support can be attributed to the unique challenges of obtaining 360° HDR environment maps, an ideal lighting representation, from the front-facing camera with existing techniques. In this paper, we propose to leverage dual-camera streaming to generate a high-quality environment map by combining multi-view lighting reconstruction and parametric directional lighting estimation. Our preliminary results show improved rendering quality using a dual-camera setup for front-facing AR compared to a commercial solution.
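To make the combination step concrete, here is a minimal sketch of a confidence-weighted blend between the two lighting estimates. It assumes (these are our assumptions, not the paper's published method) that the multi-view stage yields a partial equirectangular HDR map with a per-pixel confidence mask, and that the parametric stage yields a dominant light direction plus an ambient term:

```python
# Illustrative sketch only: the blending rule and all names are assumptions,
# not the paper's published method.
import numpy as np

H, W = 64, 128  # low-resolution equirectangular grid (latitude x longitude)

# Unit view direction for every environment-map pixel.
theta = (np.arange(H) + 0.5) / H * np.pi       # polar angle
phi = (np.arange(W) + 0.5) / W * 2.0 * np.pi   # azimuth
T, P = np.meshgrid(theta, phi, indexing="ij")
dirs = np.stack([np.sin(T) * np.cos(P),        # x
                 np.cos(T),                    # y (up)
                 np.sin(T) * np.sin(P)], axis=-1)


def parametric_env(light_dir, intensity, ambient):
    """Render a single directional light into an equirectangular radiance map."""
    d = np.asarray(light_dir, dtype=float)
    d /= np.linalg.norm(d)
    lobe = np.clip(dirs @ d, 0.0, None) ** 32  # narrow lobe around the light
    return ambient + intensity * lobe[..., None] * np.ones(3)


# Pretend the multi-view stage observed one third of the sphere.
recon = np.zeros((H, W, 3))
conf = np.zeros((H, W, 1))
recon[:, :W // 3] = 0.8
conf[:, :W // 3] = 1.0

fallback = parametric_env(light_dir=[0.3, 1.0, 0.2], intensity=5.0, ambient=0.2)

# Confidence-weighted blend: trust the reconstruction where it was observed,
# fall back to the parametric estimate elsewhere.
env_map = conf * recon + (1.0 - conf) * fallback
print(env_map.shape)  # (64, 128, 3)
```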
  2. Though virtual reality (VR) has matured considerably in recent years, the general public, especially the population of the blind and visually impaired (BVI), still cannot enjoy the benefits it provides. Current VR accessibility applications have been developed either on expensive head-mounted displays or with extra accessories and mechanisms, which are either inaccessible or inconvenient for BVI individuals. In this paper, we present a mobile VR app that enables BVI users to access a virtual environment on an iPhone in order to build their skills of perceiving and recognizing the virtual environment and the virtual objects in it. The app uses the iPhone on a selfie stick to simulate a long cane in VR, and applies Augmented Reality (AR) techniques to track the iPhone's real-time poses in an empty space of the real world, which are then synchronized to the long cane in the VR environment. Because it integrates VR and AR, we call it the Mixed Reality Cane (MR Cane); it provides BVI users with auditory and vibrotactile feedback whenever the virtual cane comes in contact with objects in VR. Thus, the MR Cane allows BVI individuals to interact with virtual objects and identify their approximate sizes and locations in the virtual environment. We performed preliminary user studies with blindfolded participants to investigate the effectiveness of the proposed mobile approach, and the results indicate that the MR Cane could effectively help BVI individuals understand interactions with virtual objects and explore 3D virtual environments. The MR Cane concept can be extended to new navigation, training, and entertainment applications for BVI individuals without significant additional effort.
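A minimal sketch of the contact-feedback loop described above, assuming the AR tracker reports the phone pose each frame and the cane is modeled as a fixed-length segment along the device's forward axis; the collision model and feedback hooks are illustrative assumptions, not the app's actual code:

```python
# Illustrative sketch only: the collision model and feedback hooks are
# assumptions, not the MR Cane app's actual code.
import numpy as np

CANE_LENGTH = 1.2  # meters; simulated long-cane length


def cane_tip(device_pos, device_forward):
    """Project the virtual cane tip from the tracked phone pose."""
    fwd = np.asarray(device_forward, dtype=float)
    return np.asarray(device_pos, dtype=float) + CANE_LENGTH * fwd / np.linalg.norm(fwd)


def check_contact(tip, objects, tip_radius=0.05):
    """Return the first virtual object whose bounding sphere the tip touches."""
    for name, center, radius in objects:
        if np.linalg.norm(tip - np.asarray(center)) <= radius + tip_radius:
            return name
    return None


# One simulated frame: pose from AR tracking, objects placed in the VR scene.
objects = [("chair", (0.0, 0.2, 1.5), 0.4),
           ("table", (1.5, 0.4, 2.0), 0.6)]
tip = cane_tip(device_pos=(0.0, 1.0, 0.3), device_forward=(0.0, -0.5, 1.0))

hit = check_contact(tip, objects)
if hit:
    # The real app would trigger spatialized audio and phone vibration here.
    print(f"contact with {hit}: play audio cue + vibrate")
```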
  3. Augmented Reality (AR) technology offers the possibility of experiencing virtual images alongside physical objects and provides high-quality hands-on experiences in an engineering lab environment. However, students still need help navigating educational content in AR environments due to a mismatch between computer-generated 3D images and the actual physical objects. This limitation can significantly affect their learning processes and workload in AR learning. In addition, a lack of student awareness of their own learning process in AR environments can negatively impact their performance improvement. To overcome these challenges, we introduced a virtual instructor in each AR module and asked a metacognitive question to improve students' metacognitive skills. The results showed that student workload was significantly reduced when a virtual instructor guided students during AR learning. There was also a significant correlation between student learning performance and workload when students were overconfident. The outcome of this study will provide knowledge for improving AR learning environments in higher education settings.
  4. Augmented reality (AR) is an efficient form of delivering spatial information and has great potential for training workers. However, AR is still not widely used for such scenarios due to the technical skills and expertise required to create interactive AR content. We developed ProcessAR, an AR-based system for authoring 2D/3D content that captures subject matter experts' (SMEs') environment-object interactions in situ. The design space for ProcessAR was identified through formative interviews with AR programming experts and SMEs, alongside a comparative design study with SMEs and novice users. To enable smooth workflows, ProcessAR locates and identifies different tools/objects in the workspace through computer vision whenever the author looks at them. We explored additional features such as embedding 2D videos with detected objects and user-adaptive triggers. A final user evaluation comparing ProcessAR to a baseline AR authoring environment showed that, according to our qualitative questionnaire, users preferred ProcessAR.
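A minimal sketch of the gaze-triggered lookup described above; the detector, gaze source, and content store are hypothetical stand-ins rather than ProcessAR's actual code:

```python
# Illustrative sketch only: the detector, gaze source, and content store are
# hypothetical stand-ins, not ProcessAR's actual code.
from dataclasses import dataclass

import numpy as np


@dataclass
class AuthoredContent:
    label: str
    video_url: str  # a 2D video embedded with the detected object


def classify_crop(patch: np.ndarray) -> str:
    """Stand-in for the computer-vision model that identifies tools/objects."""
    # A real system would run a trained classifier on the gaze-centered crop.
    return "torque_wrench"


def on_gaze(frame: np.ndarray, gaze_xy, content_index: dict, crop: int = 64):
    """When the author fixates on a point, identify the object there and
    return any authored content attached to its class."""
    x, y = gaze_xy
    patch = frame[y - crop:y + crop, x - crop:x + crop]
    return content_index.get(classify_crop(patch))


content_index = {
    "torque_wrench": AuthoredContent("torque_wrench", "howto_wrench.mp4"),
}
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder camera frame

hit = on_gaze(frame, gaze_xy=(320, 240), content_index=content_index)
if hit:
    print(f"overlay {hit.video_url} on {hit.label}")
```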
  5. This article discusses novel research methods used to examine how Augmented Reality (AR) can be utilized to present "omic" (i.e., genome, microbiome, pathogen, allergen) information to non-expert users. While existing research shows the potential of AR as a tool for personal health, methodological challenges pose a barrier to the ways in which AR research can be conducted. There is a growing need for new evaluation methods for AR systems, especially as remote testing becomes increasingly popular. In this article, we present two AR studies adapted for remote research environments in the context of personal health. The first study (n = 355) is a non-moderated remote study conducted with an AR web application, exploring how layering abstracted pathogens and mitigative behaviors onto a user affects risk perceptions, negative affect, and behavioral intentions. This study introduces methods that address participant precursor requirements, the diversity of platforms for delivering the AR intervention, unsupervised setups, and verification of participation as instructed. The second study (n = 9) presents the design and moderated remote evaluation of a technology probe, a prototype of a novel AR tool that overlays simulated, timely, and actionable environmental omic data in participants' living environments, helping users contextualize and make sense of the data. Overall, the two studies contribute to the understanding of how to investigate AR as a tool for health behavior and interventions in remote, at-home, empirical studies.