-
Virtual reality (VR) systems for guiding remote physical tasks typically require object poses to be specified in absolute world coordinates. However, many of these tasks only need object poses to be specified relative to each other. Thus, supporting only absolute pose specification can create inefficiencies in giving or following task guidance when unnecessary constraints are imposed. We are developing a VR task-guidance system that avoids this by enabling relative 6DoF poses to be specified within subsets of objects. We describe our user interface, including how geometric relationships are specified and several ways in which they are visualized, and our plans for validating our approach against existing techniques.
Free, publicly-accessible full text available October 8, 2026
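The relative-pose idea behind this abstract can be sketched with homogeneous transforms: the pose of object B relative to object A is T_A⁻¹ T_B, and reapplying that relationship after A moves recovers B's world pose. This is a minimal illustration of the underlying math, not the paper's implementation; all function names are hypothetical, and rotation is restricted to yaw to keep the sketch short.

```python
import numpy as np

def pose_matrix(position, yaw):
    """Build a 4x4 rigid transform from a position and a yaw angle (radians).

    A full 6DoF pose would use an arbitrary 3D rotation; yaw-only keeps
    this sketch compact.
    """
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0],
                          [s,  c, 0.0],
                          [0.0, 0.0, 1.0]])
    T[:3, 3] = position
    return T

def relative_pose(T_a, T_b):
    """Pose of object B expressed in object A's frame: T_a^-1 @ T_b."""
    return np.linalg.inv(T_a) @ T_b

def apply_relative(T_a_new, T_rel):
    """Recover B's world pose after A moves, preserving the relationship."""
    return T_a_new @ T_rel
```

Specifying only `T_rel` leaves the pair free to sit anywhere in the world, which is exactly the constraint relaxation the abstract describes.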
-
When collaborating relative to a shared 3D virtual object in mixed reality (MR), users may experience communication issues arising from differences in perspective. These issues include occlusion (e.g., one user not being able to see what the other is referring to) and inefficient spatial references (e.g., “to the left of this” may be confusing when users are positioned opposite to each other). This paper presents a novel technique for automatic perspective alignment in collaborative MR involving co-located interaction centered around a shared virtual object. To align one user’s perspective on the object with a collaborator’s, a local copy of the object and any other virtual elements that reference it (e.g., the collaborator’s hands) are dynamically transformed. The technique does not require virtual travel and preserves face-to-face interaction. We created a prototype application to demonstrate our technique and present an evaluation methodology for related MR collaboration and perspective alignment scenarios.
Free, publicly-accessible full text available April 25, 2026
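One way to picture the perspective-alignment transform: rotate the local copy (and any referencing elements, such as hand positions) about the object's center so that the collaborator's bearing on the object coincides with the user's. The sketch below is a hedged, yaw-only approximation under assumed geometry; the function names are hypothetical and this is not the paper's actual algorithm.

```python
import numpy as np

def alignment_rotation(object_center, user_pos, collaborator_pos):
    """Yaw angle (about the vertical axis through the object's center)
    that turns the collaborator's bearing on the object into the user's,
    so the user's local copy shows the object from the collaborator's side."""
    def bearing(p):
        v = np.asarray(p, dtype=float) - np.asarray(object_center, dtype=float)
        return np.arctan2(v[1], v[0])  # ignore height; yaw-only sketch
    return bearing(user_pos) - bearing(collaborator_pos)

def rotate_about(point, center, angle):
    """Apply the yaw rotation to a virtual element, e.g. a tracked hand."""
    c, s = np.cos(angle), np.sin(angle)
    v = np.asarray(point, dtype=float) - np.asarray(center, dtype=float)
    x, y = c * v[0] - s * v[1], s * v[0] + c * v[1]
    return np.asarray(center, dtype=float) + np.array([x, y, v[2]])
```

Because only the local copy is transformed, neither user travels, which matches the abstract's claim that face-to-face interaction is preserved.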
-
Copies (proxies) of objects are useful for selecting and manipulating objects in virtual reality (VR). Temporary proxies are destroyed after use and must be recreated for reuse. Permanent proxies persist after use for easy reselection, but can cause clutter. To investigate the benefits and drawbacks of permanent and temporary proxies, we conducted a user study in which participants performed 6DoF tasks with proxies in the Voodoo Dolls technique, revealing that permanent proxies were more efficient for hard reselection and were preferred by participants.
Free, publicly-accessible full text available March 8, 2026
-
Entity–Component–System (ECS) architectures are fundamental to many systems for developing extended reality (XR) applications. These applications often contain complex scenes and require intricate application logic to connect components, making debugging and analysis difficult. Graph-based tools have been created to show actions in ECS-based scene hierarchies, but few address interactions that go beyond traditional hierarchical communication. To address this, we present an XR GUI for Mercury (a toolkit to handle cross-component ECS communication) that allows developers to view and edit relationships and interactions between scene entities in Mercury.
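For readers unfamiliar with the pattern, an ECS stores entities as bare identifiers, data in per-type component tables, and behavior in systems that query entities holding the required components. The sketch below is a generic, minimal ECS for orientation only; it does not reflect Mercury's API, and all names are hypothetical.

```python
from collections import defaultdict
import itertools

class World:
    """Minimal ECS: entities are ids, components live in per-type dicts,
    and systems are plain functions run over matching entities."""

    def __init__(self):
        self._ids = itertools.count()
        self.components = defaultdict(dict)  # component name -> {entity id: data}

    def create_entity(self, **comps):
        """Allocate an entity id and attach the given components to it."""
        eid = next(self._ids)
        for name, data in comps.items():
            self.components[name][eid] = data
        return eid

    def query(self, *names):
        """Yield (entity, component...) tuples for entities that have
        every named component — the core ECS iteration primitive."""
        ids = set(self.components[names[0]])
        for n in names[1:]:
            ids &= set(self.components[n])
        for eid in sorted(ids):
            yield (eid, *(self.components[n][eid] for n in names))

def movement_system(world, dt):
    """Example system: integrate velocity into position each tick."""
    for eid, pos, vel in world.query("position", "velocity"):
        world.components["position"][eid] = [p + v * dt for p, v in zip(pos, vel)]
```

The debugging difficulty the abstract mentions arises because interactions like `movement_system` are implicit in which components an entity happens to hold, rather than being visible in any scene hierarchy.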
-
We present a prototype virtual reality user interface for robot teleoperation that supports high-level specification of 3D object positions and orientations in remote assembly tasks. Users interact with virtual replicas of task objects. They asynchronously assign multiple goals in the form of 6DoF destination poses without needing to be familiar with specific robots and their capabilities, and manage and monitor the execution of these goals. The user interface employs two different spatiotemporal visualizations for assigned goals: one represents all goals within the user’s workspace (Aggregated View), while the other depicts each goal within a separate world in miniature (Timeline View). We conducted a user study of the interface without the robot system to compare how these visualizations affect user efficiency and task load. The results show that while the Aggregated View helped the participants finish the task faster, the participants preferred the Timeline View.
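The asynchronous goal assignment described above implies a data structure of ordered 6DoF goals with per-goal status that both views can render. The following is a hedged sketch of one such representation, with hypothetical names; it is not the prototype's actual data model.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class GoalStatus(Enum):
    PENDING = auto()
    EXECUTING = auto()
    DONE = auto()

@dataclass
class Goal:
    """An asynchronously assigned 6DoF destination pose for a task object."""
    object_id: str
    position: tuple     # (x, y, z) in the shared task frame
    orientation: tuple  # unit quaternion (w, x, y, z)
    status: GoalStatus = GoalStatus.PENDING

@dataclass
class GoalTimeline:
    """Ordered goal list, as a Timeline View might enumerate it;
    an Aggregated View would instead draw all goals in one workspace."""
    goals: list = field(default_factory=list)

    def assign(self, goal):
        self.goals.append(goal)

    def next_pending(self):
        """The goal the robot should execute next, if any."""
        return next((g for g in self.goals if g.status is GoalStatus.PENDING), None)
```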