Title: CAPturAR: An Augmented Reality Tool for Authoring Human-Involved Context-Aware Applications
Recognition of human behavior plays an important role in context-aware applications. However, it remains challenging for end-users to build personalized applications that accurately recognize their own activities. We therefore present CAPturAR, an in-situ programming tool that enables users to rapidly author context-aware applications by referring to their previous activities. We customize an AR head-mounted device with multiple camera systems that allow for non-intrusive capturing of the user's daily activities. During authoring, we reconstruct the captured data in AR with an animated avatar and use virtual icons to represent the surrounding environment. With our visual programming interface, users create human-centered rules for the applications and experience them instantly in AR. We further demonstrate four use cases enabled by CAPturAR, verify the effectiveness of the AR-HMD and the authoring workflow with a system evaluation using our prototype, and conduct a remote user study in an AR simulator to evaluate usability.
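The abstract describes authoring human-centered rules through a visual programming interface. Below is a minimal, hypothetical sketch of how such a trigger-action rule could be represented and fired once an activity is recognized; the class and field names are illustrative assumptions, not CAPturAR's actual API.

    # Hypothetical trigger-action rule for a context-aware application.
    # Names and fields are illustrative only, not CAPturAR's data model.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ContextTrigger:
        activity: str   # recognized human activity, e.g. "open_fridge"
        location: str   # where the activity happens, e.g. "kitchen"

    @dataclass
    class Rule:
        trigger: ContextTrigger
        action: Callable[[], None]   # reaction to run when the trigger matches

    def remind_to_take_medicine() -> None:
        print("Reminder: take your medication.")

    rules = [Rule(ContextTrigger("open_fridge", "kitchen"), remind_to_take_medicine)]

    def on_activity_recognized(activity: str, location: str) -> None:
        # Fire every rule whose trigger matches the recognized context.
        for rule in rules:
            if rule.trigger.activity == activity and rule.trigger.location == location:
                rule.action()

    on_activity_recognized("open_fridge", "kitchen")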
Award ID(s):
1839971
NSF-PAR ID:
10297584
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM Symposium on User Interface Software and Technology (UIST '20)
Page Range / eLocation ID:
328 to 341
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Freehand gesture is an essential input modality for modern Augmented Reality (AR) user experiences. However, developing AR applications with customized hand interactions remains a challenge for end-users. We therefore propose GesturAR, an end-to-end authoring tool that enables users to create in-situ freehand AR applications through embodied demonstration and visual programming. During authoring, users can intuitively demonstrate the customized gesture inputs while referring to the spatial and temporal context. Based on a taxonomy of gestures in AR, we propose a hand interaction model that maps the gesture inputs to the reactions of the AR contents. Thus, users can author comprehensive freehand applications using trigger-action visual programming and instantly experience the results in AR. Further, we demonstrate multiple application scenarios enabled by GesturAR, such as interactive virtual objects, robots, and avatars, room-level interactive AR spaces, and embodied AR presentations. Finally, we evaluate the performance and usability of GesturAR through a user study.
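    The hand interaction model described above maps recognized gestures to reactions of AR content. A minimal, hypothetical sketch of such a trigger-action mapping follows; the gesture names and reaction functions are illustrative assumptions, not GesturAR's actual API.

        # Hypothetical gesture-to-reaction mapping (trigger-action style).
        from typing import Callable, Dict

        def spin_virtual_cube() -> None:
            print("Virtual cube starts spinning.")

        def wave_avatar_back() -> None:
            print("Avatar waves back at the user.")

        # Hand interaction model as a simple lookup: gesture class -> AR reaction.
        interaction_model: Dict[str, Callable[[], None]] = {
            "tap_on_cube": spin_virtual_cube,
            "wave": wave_avatar_back,
        }

        def on_gesture_recognized(gesture: str) -> None:
            reaction = interaction_model.get(gesture)
            if reaction is not None:
                reaction()

        on_gesture_recognized("wave")   # prints "Avatar waves back at the user."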
  2. We present V.Ra, a visual and spatial programming system for robot-IoT task authoring. In V.Ra, programmable mobile robots serve as binding agents that link stationary IoT devices and perform collaborative tasks. We establish an ecosystem that coherently connects the three key elements of robot task planning (the human, the robot, and the IoT) with a single mobile AR device. Users perform task authoring with the Augmented Reality (AR) handheld interface; placing the AR device onto the mobile robot then directly transfers the task plan in a what-you-do-is-what-robot-does (WYDWRD) manner. The mobile device mediates the interactions between the user, the robot, and the IoT-oriented tasks, and guides path-planning execution with its embedded simultaneous localization and mapping (SLAM) capability. Through various use cases and preliminary studies, we demonstrate that V.Ra enables instant, robust, and intuitive room-scale navigation and interaction task authoring.
  3. We present GhostAR, a time-space editor for authoring and acting out Human-Robot-Collaborative (HRC) tasks in-situ. Our system adopts an embodied authoring approach in Augmented Reality (AR) for spatially editing actions and programming robots through demonstrative role-playing. We propose a novel HRC workflow that externalizes the user's authoring as a demonstrative and editable AR ghost, allowing for spatially situated visual referencing, realistic animated simulation, and collaborative action guidance. We develop a dynamic time warping (DTW) based collaboration model that takes the real-time captured motion as input, maps it to the previously authored human actions, and outputs the corresponding robot actions to achieve adaptive collaboration. We emphasize in-situ authoring and rapid iteration of joint plans without an offline training process. Further, we demonstrate and evaluate the effectiveness of our workflow through HRC use cases and a three-session user study.
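    The DTW-based collaboration model above aligns the live-captured human motion with previously authored actions and retrieves the paired robot action. A minimal sketch of that matching step follows, using 1-D feature sequences and placeholder action names that are illustrative assumptions rather than GhostAR's actual data model.

        # Sketch: match captured motion to the nearest authored human action via DTW,
        # then look up the robot action paired with it. All data below is illustrative.
        from math import inf

        def dtw_distance(a: list, b: list) -> float:
            # Classic dynamic-time-warping cost between two 1-D sequences.
            n, m = len(a), len(b)
            cost = [[inf] * (m + 1) for _ in range(n + 1)]
            cost[0][0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(a[i - 1] - b[j - 1])
                    cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
            return cost[n][m]

        # Authored pairs: human motion sequence -> corresponding robot action.
        authored = {
            "hand_over_part": ([0.0, 0.2, 0.6, 1.0], "extend_gripper"),
            "step_back":      ([1.0, 0.7, 0.3, 0.0], "resume_motion"),
        }

        def match_robot_action(live_motion: list) -> str:
            # Pick the authored human action with the smallest DTW cost to the live motion.
            best_name = min(authored, key=lambda k: dtw_distance(live_motion, authored[k][0]))
            return authored[best_name][1]

        print(match_robot_action([0.05, 0.3, 0.65, 0.95]))   # -> extend_gripper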
  4. We present V.Ra, a visual and spatial programming system for robot-IoT task authoring. In V.Ra, programmable mobile robots serve as binding agents that link stationary IoT devices and perform collaborative tasks. We establish an ecosystem that coherently connects the three key elements of robot task planning (human, robot, and IoT) with a single AR-SLAM device. Users perform task authoring in an analogous manner with the Augmented Reality (AR) interface; placing the device onto the mobile robot then directly transfers the task plan in a what-you-do-is-what-robot-does (WYDWRD) manner. The mobile device mediates the interactions between the user, the robot, and the IoT-oriented tasks, and guides path-planning execution with its SLAM capability.
  5. Augmented reality (AR) applications are growing in popularity in educational settings. While the effects of AR experiences on learning have been widely studied, there is relatively little research on understanding the impact of AR on the dynamics of co-located collaborative learning, specifically in the context of novices programming robots. Educational robotics is a powerful learning context because it engages students with problem solving, critical thinking, STEM (Science, Technology, Engineering, Mathematics) concepts, and collaboration skills. However, such collaborations can suffer when students have unequal access to resources or dominant peers. In this research we investigate how augmented reality impacts learning and collaboration while peers engage in robot programming activities. We use a mixed methods approach to measure how participants learn, manipulate resources, and engage in problem solving activities with peers. We investigate how these behaviors are impacted by the presence of augmented reality visualizations and by participants' proximity to resources. We find that augmented reality improved overall group learning and collaboration. Detailed analysis shows that AR helps one participant substantially more than the other by improving their ability to learn and contribute while remaining engaged with the robot. Furthermore, augmented reality helps both participants maintain common ground and balance contributions during problem solving activities. We discuss the implications of these results for designing AR and non-AR collaborative interfaces.