We present a prototype virtual reality user interface for robot teleoperation that supports high-level specification of 3D object positions and orientations in remote assembly tasks. Users interact with virtual replicas of task objects: they asynchronously assign multiple goals in the form of 6DoF destination poses, without needing to be familiar with specific robots or their capabilities, and then manage and monitor the execution of those goals. The user interface employs two different spatiotemporal visualizations for assigned goals: one represents all goals within the user's workspace (Aggregated View), while the other depicts each goal within a separate world in miniature (Timeline View). We conducted a user study of the interface alone, without the robot system, to compare how these visualizations affect user efficiency and task load. The results show that while the Aggregated View helped participants finish the task faster, participants preferred the Timeline View.
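The record includes no code, but a minimal sketch of how such an asynchronously assigned 6DoF goal might be represented helps make the idea concrete. All names and fields below are our own assumptions, not the authors' implementation:

```python
# Hypothetical sketch of a 6DoF goal record and an asynchronous goal queue,
# as described in the abstract; every name and field is an illustrative assumption.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Tuple


class GoalStatus(Enum):
    PENDING = auto()     # assigned by the user, not yet executed
    EXECUTING = auto()   # robot is currently moving the object
    DONE = auto()        # object placed at the destination pose
    FAILED = auto()      # execution failed; user may reassign


@dataclass
class Goal:
    object_id: str                                   # which task object to move
    position: Tuple[float, float, float]             # destination (x, y, z), meters
    orientation: Tuple[float, float, float, float]   # destination quaternion (x, y, z, w)
    status: GoalStatus = GoalStatus.PENDING


@dataclass
class GoalQueue:
    """Goals are assigned asynchronously and monitored while the robot executes."""
    goals: List[Goal] = field(default_factory=list)

    def assign(self, goal: Goal) -> None:
        self.goals.append(goal)

    def pending(self) -> List[Goal]:
        return [g for g in self.goals if g.status is GoalStatus.PENDING]
```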
Lights, Headset, Tablet, Action: Exploring the Use of Hybrid User Interfaces for Immersive Situated Analytics
While augmented reality (AR) headsets provide entirely new ways of seeing and interacting with data, traditional computing devices can play a symbiotic role when used in conjunction with AR as a hybrid user interface. A promising use case for this setup is situated analytics: AR can provide embedded views that are integrated with their physical referents, while a separate device such as a tablet can provide a familiar situated overview of the entire dataset being examined. While prior work has explored similar setups, we sought to understand how people perceive and make use of both embedded visualizations (in AR) and situated visualizations (on a tablet) to achieve their own goals. To this end, we conducted an exploratory study using a scenario and task familiar to most: adjusting light levels in a smart home based on personal preference and energy usage. In a prototype that simulates AR in virtual reality, embedded visualizations are positioned next to lights distributed across an apartment, and situated visualizations are provided on a handheld tablet. We observed and interviewed 19 participants using the prototype. Participants were easily able to perform the task, though the extent to which the visualizations were used varied: some made decisions based on the data, others only on their own preferences. Our findings also suggest two distinct roles that situated and embedded visualizations can play, and how this clear separation might improve user satisfaction and minimize attention-switching overheads in this hybrid user interface setup. We conclude by discussing the importance of considering the user's needs, goals, and physical environment when designing and evaluating effective situated analytics applications.
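As a rough illustration of the division of labor the abstract describes, one can imagine a single data model feeding both view types; the sketch below is our assumption of what that might look like, not the authors' prototype code:

```python
# Illustrative sketch (our assumption, not the authors' code) of a hybrid UI data
# model: each light backs an embedded view anchored at its physical referent in AR,
# while a tablet renders a situated overview of the whole dataset.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Light:
    name: str
    position: Tuple[float, float, float]  # anchor point for the embedded AR view
    brightness: float                     # current level, 0.0-1.0
    energy_wh: float                      # recent energy usage, watt-hours


def embedded_view(light: Light) -> str:
    """Compact, in-place detail rendered next to the light itself in AR."""
    return f"{light.name}: {light.brightness:.0%} | {light.energy_wh:.1f} Wh"


def situated_overview(lights: List[Light]) -> str:
    """Tablet overview: the entire dataset at once, sorted by energy usage."""
    rows = sorted(lights, key=lambda l: l.energy_wh, reverse=True)
    return "\n".join(embedded_view(l) for l in rows)
```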
- Award ID(s): 2238313
- PAR ID: 10590076
- Publisher / Repository: ACM
- Date Published:
- Journal Name: Proceedings of the ACM on Human-Computer Interaction
- Volume: 8
- Issue: ISS
- ISSN: 2573-0142
- Page Range / eLocation ID: 517 to 539
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Personal visualizations are a distinct class of visualizations in which users interact with their own data to draw inferences about themselves. In this paper, we study how a realistic understanding of personal visualizations can be gained from analyzing user interactions. We designed an interface presenting visualizations of personal data gathered in a prior study and logged interactions from 369 participants as they each explored their own data. We found that participants spent varying amounts of time exploring their data and used a variety of physical devices, which could have affected their engagement with the visualizations. Our findings also suggest that participants made more comparisons between their own data instances than against the provided baselines, and that certain interface design choices, such as the ordering of options, influenced their exploratory behaviors.
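The study's actual logging schema is not given in this record; the sketch below is a hedged assumption of how such interaction events might be logged and summarized into time-on-task and comparison counts:

```python
# Minimal sketch (our assumption) of interaction logging and per-participant
# summaries of the kind the study analyzes: time spent and comparison counts.
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Event:
    participant_id: int
    timestamp: float   # seconds since session start
    action: str        # e.g. "hover", "compare_self", "compare_baseline"


def summarize(events: List[Event]) -> Dict[int, Dict[str, float]]:
    """Group events by participant and derive simple engagement measures."""
    by_participant: Dict[int, List[Event]] = defaultdict(list)
    for e in events:
        by_participant[e.participant_id].append(e)

    summary: Dict[int, Dict[str, float]] = {}
    for pid, evts in by_participant.items():
        evts.sort(key=lambda e: e.timestamp)
        summary[pid] = {
            "time_spent_s": evts[-1].timestamp - evts[0].timestamp,
            "self_comparisons": sum(e.action == "compare_self" for e in evts),
            "baseline_comparisons": sum(e.action == "compare_baseline" for e in evts),
        }
    return summary
```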
Virtual content instability caused by device pose tracking error remains a prevalent issue in markerless augmented reality (AR), especially on smartphones and tablets. However, when examining environments that will host AR experiences, it is challenging to determine where those instability artifacts will occur: we rarely have access to ground truth pose to measure pose error, and even when pose error is available, traditional visualizations do not connect that data with the real environment, limiting their usefulness. To address these issues we present SiTAR (Situated Trajectory Analysis for Augmented Reality), the first situated trajectory analysis system for AR that incorporates estimates of pose tracking error. We start by developing the first uncertainty-based pose error estimation method for visual-inertial simultaneous localization and mapping (VI-SLAM), which allows us to obtain pose error estimates without ground truth; we achieve an average accuracy of up to 96.1% and an average F1 score of up to 0.77 in our evaluations on four VI-SLAM datasets. Next, we present our SiTAR system, implemented for ARCore devices, which combines a backend that supplies uncertainty-based pose error estimates with a frontend that generates situated trajectory visualizations. Finally, we evaluate the efficacy of SiTAR in realistic conditions by testing three visualization techniques in an in-the-wild study with 15 users and 13 diverse environments; this study reveals the impact that both environment scale and the properties of surfaces present can have on user experience and task performance.
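To give a feel for the frontend step, here is a hedged sketch of mapping per-sample pose error estimates onto a color-coded trajectory; the function names and error thresholds are our assumptions, not SiTAR's published code:

```python
# Hypothetical sketch (not SiTAR's code): pair each trajectory sample with a
# color reflecting its estimated pose error, for situated visualization in AR.
from typing import List, Tuple

Color = str  # stand-in for an RGBA color in a real renderer


def color_for_error(error_m: float) -> Color:
    """Bin an estimated pose error (meters) into a traffic-light color.
    Thresholds are illustrative assumptions."""
    if error_m < 0.05:
        return "green"    # stable tracking expected here
    if error_m < 0.15:
        return "yellow"   # moderate instability artifacts likely
    return "red"          # high expected virtual content instability


def colored_trajectory(
    positions: List[Tuple[float, float, float]],
    error_estimates_m: List[float],
) -> List[Tuple[Tuple[float, float, float], Color]]:
    """Zip trajectory samples with colors derived from estimated error."""
    return [(p, color_for_error(e)) for p, e in zip(positions, error_estimates_m)]
```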
The use of augmented reality (AR) with semi-autonomous aerial systems in civil infrastructure inspection extends human capabilities by improving access to hard-to-reach areas, decreasing the physical requirements of the task, and augmenting the inspector's visual field of view with useful information. Still unknown, though, is how helpful AR visual aids are when they are imperfect and provide the user with erroneous data. A total of 28 participants viewed a simulated bridge from the perspective of an autonomous drone in a virtual reality environment and performed a target detection task. In this study, we analyze the effect of AR cue type across discrete levels of target saliency by measuring performance in a signal detection task. Results showed significant differences in false alarm rates across the target salience conditions, but no significant differences across AR cue types (none, bounding box, corner-bound box, and outline) in terms of hits and misses.
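For readers unfamiliar with signal detection analysis, sensitivity is conventionally summarized as d' = Z(hit rate) - Z(false alarm rate). The worked example below is our addition, not taken from the paper:

```python
# Worked example (our addition): sensitivity d' from hit/false-alarm counts
# in a signal detection task, with a standard correction to keep rates
# strictly between 0 and 1 (log-linear adjustment).
from statistics import NormalDist

Z = NormalDist().inv_cdf  # inverse standard normal CDF


def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """d' = Z(hit rate) - Z(false alarm rate)."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return Z(hit_rate) - Z(fa_rate)


# e.g. 40 hits, 10 misses, 5 false alarms, 45 correct rejections:
print(round(d_prime(40, 10, 5, 45), 2))  # higher d' = better discrimination
```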
We present V.Ra, a visual and spatial programming system for robot-IoT task authoring. In V.Ra, programmable mobile robots serve as binding agents that link stationary IoT devices and perform collaborative tasks. We establish an ecosystem that coherently connects the three key elements of robot task planning, the human, the robot, and the IoT devices, with a single mobile AR device. Users author tasks with the augmented reality (AR) handheld interface; placing the AR device onto the mobile robot then directly transfers the task plan in a what-you-do-is-what-robot-does (WYDWRD) manner. The mobile device mediates the interactions between the user, the robot, and the IoT-oriented tasks, and guides path planning execution with its embedded simultaneous localization and mapping (SLAM) capability. We demonstrate through various use cases and preliminary studies that V.Ra enables instant, robust, and intuitive room-scale navigation and interactive task authoring.
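A task plan in such a system might plausibly be a sequence of navigation waypoints interleaved with IoT commands; the sketch below is our assumption of that shape, with all names hypothetical rather than V.Ra's published API:

```python
# Illustrative sketch (names are assumptions, not V.Ra's API) of a task plan
# authored on the handheld AR device and then executed on the robot in a
# what-you-do-is-what-robot-does (WYDWRD) manner.
from dataclasses import dataclass
from typing import List, Tuple, Union


@dataclass
class Navigate:
    waypoint: Tuple[float, float]   # (x, y) in the shared SLAM map frame


@dataclass
class IoTAction:
    device_id: str                  # stationary IoT device to interact with
    command: str                    # e.g. "turn_on", "open", "start_cycle"


Step = Union[Navigate, IoTAction]


@dataclass
class TaskPlan:
    """Authored spatially in AR, then transferred whole to the robot."""
    steps: List[Step]


plan = TaskPlan(steps=[
    Navigate(waypoint=(1.0, 2.5)),
    IoTAction(device_id="lamp-1", command="turn_on"),
    Navigate(waypoint=(3.2, 0.8)),
])
```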