While augmented reality (AR) headsets provide entirely new ways of seeing and interacting with data, traditional computing devices can play a symbiotic role when used in conjunction with AR as a hybrid user interface. A promising use case for this setup is situated analytics. AR can provide embedded views that are integrated with their physical referents, and a separate device such as a tablet can provide a familiar situated overview of the entire dataset being examined. While prior work has explored similar setups, we sought to understand how people perceive and make use of both embedded visualizations (in AR) and situated visualizations (on a tablet) to achieve their own goals. To this end, we conducted an exploratory study using a scenario and task familiar to most: adjusting light levels in a smart home based on personal preference and energy usage. In a prototype that simulates AR in virtual reality, embedded visualizations are positioned next to lights distributed across an apartment, and situated visualizations are provided on a handheld tablet. We observed and interviewed 19 participants using the prototype. Participants were easily able to perform the task, though the extent to which the visualizations were used during the task varied, with some participants making decisions based on the data and others relying only on their own preferences. Our findings also suggest two distinct roles that situated and embedded visualizations can play, and how this clear separation might improve user satisfaction and minimize attention-switching overhead in this hybrid user interface setup. We conclude by discussing the importance of considering the user's needs, goals, and physical environment when designing and evaluating effective situated analytics applications.
Asynchronously Assigning, Monitoring, and Managing Assembly Goals in Virtual Reality for High-Level Robot Teleoperation
We present a prototype virtual reality user interface for robot teleoperation that supports high-level specification of 3D object positions and orientations in remote assembly tasks. Users interact with virtual replicas of task objects. They asynchronously assign multiple goals in the form of 6DoF destination poses without needing to be familiar with specific robots and their capabilities, and manage and monitor the execution of these goals. The user interface employs two different spatiotemporal visualizations for assigned goals: one represents all goals within the user’s workspace (Aggregated View), while the other depicts each goal within a separate world in miniature (Timeline View). We conducted a user study of the interface without the robot system to compare how these visualizations affect user efficiency and task load. The results show that while the Aggregated View helped the participants finish the task faster, the participants preferred the Timeline View.
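A minimal sketch of the goal model this interface implies, assuming goals are 6DoF poses attached to task objects and collected in a queue the user can monitor; the names `Goal`, `GoalStatus`, and `GoalQueue` are hypothetical, not the authors' code:

```python
# Hypothetical goal model for asynchronous, high-level teleoperation.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Dict, List, Optional, Tuple


class GoalStatus(Enum):
    PENDING = auto()    # assigned by the user, not yet executed
    EXECUTING = auto()  # the robot is working toward this pose
    DONE = auto()       # destination pose reached
    FAILED = auto()     # execution failed; the user may reassign


@dataclass
class Goal:
    object_id: str                        # which task object to move
    position: Tuple[float, float, float]  # destination (x, y, z) in meters
    orientation: Tuple[float, float, float, float]  # quaternion (x, y, z, w)
    status: GoalStatus = GoalStatus.PENDING


@dataclass
class GoalQueue:
    """Goals accumulate asynchronously; the robot drains them in order
    while the user keeps assigning, monitoring, and editing."""
    goals: List[Goal] = field(default_factory=list)

    def assign(self, goal: Goal) -> None:
        self.goals.append(goal)

    def next_pending(self) -> Optional[Goal]:
        return next((g for g in self.goals
                     if g.status is GoalStatus.PENDING), None)

    def summary(self) -> Dict[GoalStatus, int]:
        # Per-status counts that a monitoring view could surface.
        counts: Dict[GoalStatus, int] = {}
        for g in self.goals:
            counts[g.status] = counts.get(g.status, 0) + 1
        return counts
```

Under this assumed model, an Aggregated View would render every pending pose at once within the user's workspace, while a Timeline View would step through the same records one world-in-miniature at a time.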
- Award ID(s):
- 2037101
- PAR ID:
- 10563416
- Publisher / Repository:
- IEEE Virtual Reality 2024
- Date Published:
- ISBN:
- 979-8-3503-7402-5
- Page Range / eLocation ID:
- 450 to 460
- Subject(s) / Keyword(s):
- Human-centered computing—Human computer interaction (HCI)—Interaction paradigms—Virtual reality; Human-centered computing—Interaction design—Interaction design process and methods—User interface design; Computer systems organization—Embedded and cyber-physical systems; Robotics—External interfaces for robotics
- Format(s):
- Medium: X
- Location:
- Orlando, FL, USA
- Sponsoring Org:
- National Science Foundation
More Like this
- As collaborative robots become increasingly widespread in manufacturing settings, there is a greater need for tools and interfaces to support operators who integrate, supervise, and troubleshoot these systems. In this paper, we present an application of the Robot Attention Demand (RAD) metric for use in the design of user interfaces to support operators in collaborative manufacturing scenarios. Building on prior work that introduced RAD, we designed and implemented prototype timeline and countdown-timer interfaces to be used within a collaborative assembly-inspection task where an operator is also responsible for a separate sorting task. We performed a user evaluation to investigate the effects of displaying predictive RAD information on operator performance and perceptions of the task. Our results show lower perceived task load and increased usability scores compared to a baseline condition without an interface. These findings suggest that predictive RAD should be used by designers and engineers developing operator interfaces for collaborative robot applications in manufacturing.
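A minimal sketch of the countdown-timer idea, assuming the robot system can predict the next moment it will need the operator; `next_attention_time` is a hypothetical stand-in, and the paper's actual RAD computation is not reproduced here:

```python
# Hypothetical countdown display driven by predicted attention demand.
import time


def next_attention_time() -> float:
    """Placeholder: a real system would derive this from the robot's task
    model. Here it simply predicts 'attention needed in 12 seconds'."""
    return time.monotonic() + 12.0


def countdown(deadline: float, tick: float = 1.0) -> None:
    """Show time remaining until the robot needs the operator, so the
    operator can pace a concurrent task (e.g., sorting) around it."""
    remaining = deadline - time.monotonic()
    while remaining > 0:
        print(f"Robot needs attention in {remaining:4.0f} s", end="\r")
        time.sleep(min(tick, remaining))
        remaining = deadline - time.monotonic()
    print("\nRobot needs attention now.")


if __name__ == "__main__":
    countdown(next_attention_time())
```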
- Dynamically Interactive Visualization (DIVI) is a novel approach for orchestrating interactions within and across static visualizations. DIVI deconstructs Scalable Vector Graphics charts at runtime to infer content and coordinate user input, decoupling interaction from specification logic. This decoupling allows interactions to extend and compose freely across different tools, chart types, and analysis goals. DIVI exploits positional relations of marks to detect chart components such as axes and legends, reconstruct scales and view encodings, and infer data fields. DIVI then enumerates candidate transformations across inferred data to perform linking between views. To support dynamic interaction without prior specification, we introduce a taxonomy that formalizes the space of standard interactions by chart element, interaction type, and input event. We demonstrate DIVI's usefulness for rapid data exploration and analysis through a usability study with 13 participants and a diverse gallery of dynamically interactive visualizations, including single chart, multi-view, and cross-tool configurations.
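An illustrative heuristic in the spirit of DIVI's runtime deconstruction step, assuming tick labels appear as SVG `text` elements whose shared baseline reveals a horizontal axis; the function names are invented and this is not DIVI's actual code:

```python
# Illustrative SVG deconstruction: find a horizontal axis from tick labels.
import xml.etree.ElementTree as ET
from collections import Counter
from typing import List, Optional, Tuple

SVG_NS = "{http://www.w3.org/2000/svg}"


def label_positions(svg_text: str) -> List[Tuple[float, float]]:
    """Collect (x, y) anchors of <text> elements, the usual tick labels."""
    root = ET.fromstring(svg_text)
    return [(float(t.get("x", 0)), float(t.get("y", 0)))
            for t in root.iter(f"{SVG_NS}text")]


def guess_x_axis_y(labels: List[Tuple[float, float]]) -> Optional[float]:
    """Tick labels along a horizontal axis share (almost) one baseline y,
    so the most common rounded y among several labels is a candidate."""
    ys = Counter(round(y) for _, y in labels)
    if not ys:
        return None
    y, count = ys.most_common(1)[0]
    return float(y) if count >= 3 else None
```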
- Shared-gaze visualizations (SGV) allow collocated collaborators to understand each other's attention and intentions while working jointly in an augmented reality setting. However, prior work has overlooked user control and privacy over how gaze information can be shared between collaborators. In this work, we examine two methods for visualizing shared gaze between collaborators: gaze-hover and gaze-trigger. We compare the methods with existing solutions through a paired-user evaluation study in which participants complete a virtual assembly task. Finally, we contribute an understanding of user perceptions, preferences, and design implications of shared-gaze visualizations in augmented reality.
- Timelines are commonly represented on a horizontal line, which is not necessarily the most effective way to visualize temporal event sequences. However, few experiments have evaluated how timeline shape influences task performance. We present the design and results of a controlled experiment run on Amazon Mechanical Turk (n=192) in which we evaluate how timeline shape affects task completion time, correctness, and user preference. We tested 12 combinations of 4 shapes (horizontal line, vertical line, circle, and spiral) and 3 data types (recurrent, non-recurrent, and mixed event sequences). We found good evidence that timeline shape meaningfully affects user task completion time but not correctness, and that users have a strong shape preference. Building on our results, we present design guidelines for creating effective timeline visualizations based on user task and data types. A free copy of this paper, the evaluation stimuli and data, and code are available at https://osf.io/qr5yu/.
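As a worked example of one tested shape, the sketch below maps a temporal event sequence onto an Archimedean spiral, r = a + b * theta; the constants and normalization are illustrative choices, not the paper's stimuli:

```python
# Illustrative spiral timeline: map timestamps onto an Archimedean spiral.
import math
from typing import List, Tuple


def spiral_layout(timestamps: List[float], turns: float = 3.0,
                  a: float = 10.0, b: float = 8.0) -> List[Tuple[float, float]]:
    """Map each timestamp to an (x, y) point along r = a + b * theta.

    Time is normalized to [0, 1] and stretched over `turns` revolutions;
    for recurrent sequences, events one period apart land radially
    adjacent on neighboring turns.
    """
    t0, t1 = min(timestamps), max(timestamps)
    span = (t1 - t0) or 1.0  # avoid division by zero for a single instant
    points = []
    for t in timestamps:
        theta = 2 * math.pi * turns * (t - t0) / span
        r = a + b * theta
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points


# Example: 36 monthly events over three years occupy three aligned turns.
months = [float(m) for m in range(36)]
xy = spiral_layout(months, turns=3.0)
```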