This content will become publicly available on April 25, 2026

Title: ARticulate: Interactive Visual Guidance for Demonstrated Rotational Degrees of Freedom in Mobile AR
Mobile Augmented Reality (AR) offers a powerful way to provide spatially aware guidance for real-world applications. In many cases, these applications involve the configuration of a camera or articulated subject, asking users to navigate several spatial degrees of freedom (DOF) at once. Most guidance for such tasks relies on decomposing the available DOF into subspaces that can be more easily mapped to simple 1D or 2D visualizations. Unfortunately, different factorizations of the same motion often map to very different visual feedback, and finding the factorization that best matches a user's intuition can be difficult. We propose an interactive approach that infers rotational degrees of freedom from short user demonstrations. Users select one or two DOFs at a time by demonstrating a small range of motion, which we use to learn a rotational frame that best aligns with user control of the object. We show that deriving visual feedback from this learned rotational frame leads to improved task completion times on 6DOF guidance tasks compared to the standard default reference frames used in most mixed reality applications.
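
The paper's implementation is not included in this record. As a rough sketch of how a rotational frame might be inferred from a short demonstration, the hypothetical Python function below estimates the dominant rotation axis from a sequence of sampled orientations; the function name, the numpy-based approach, and all thresholds are illustrative assumptions, not the authors' method:

import numpy as np

def rotation_axis_from_demo(rotations):
    # rotations: sequence of 3x3 rotation matrices sampled during the demo.
    # Returns a unit vector: the axis that best explains the relative
    # rotations between consecutive samples.
    axes = []
    for R_prev, R_next in zip(rotations[:-1], rotations[1:]):
        R_rel = R_next @ R_prev.T  # relative rotation between samples
        # For a rotation by angle t about unit axis n, R - R^T = 2 sin(t) [n]x,
        # so the skew-symmetric part gives the axis direction.
        w = np.array([R_rel[2, 1] - R_rel[1, 2],
                      R_rel[0, 2] - R_rel[2, 0],
                      R_rel[1, 0] - R_rel[0, 1]])
        if np.linalg.norm(w) > 1e-8:  # skip near-identity steps
            axes.append(w / np.linalg.norm(w))
    # Principal direction of the collected axes via SVD, which is robust to
    # back-and-forth demonstrations where the axis sign flips between samples.
    _, _, Vt = np.linalg.svd(np.stack(axes))
    return Vt[0]
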
Award ID(s):
2340448
PAR ID:
10594569
Author(s) / Creator(s):
; ;
Publisher / Repository:
ACM
Date Published:
ISBN:
9798400713941
Page Range / eLocation ID:
1 to 8
Subject(s) / Keyword(s):
Augmented Reality; Computational Photography; Human-Computer Interaction; HCI; Mobile Application
Format(s):
Medium: X
Location:
Yokohama, Japan
Sponsoring Org:
National Science Foundation
More Like this
  1. Generating salient and intuitively understood haptic feedback on the human finger through a non-intrusive wearable remains a challenge in haptic device development. Most existing solutions either restrict the hand and fingers' natural range of motion or impede sensory perception, quickly becoming intrusive during dexterous manipulation tasks. Here, we introduce NURing (Non-intrUsive Ring), a tendon-actuated haptic device that provides kinesthetic feedback by deflecting the finger. The NURing is easily donned and doffed, enabling on-demand kinesthetic feedback while leaving the hand and fingers free for dexterous tasks. We demonstrate that the device delivers perceptually salient feedback and evaluate its performance through a series of uniaxial motion guidance tasks. The lightweight NURing device, weighing approximately 220 g, can generate guidance cues at up to 1 Hz, enabling participants to identify target directions in under 3 s with a 1.5° steady-state error, corresponding to a fingertip deviation of less than 11 mm. Additionally, it can guide users along complex, smooth trajectories with an average trajectory error of 7°. These findings highlight the effectiveness of fingertip deflection as a kinesthetic feedback modality, enabling precise guidance for real-world applications such as sightless touchscreen navigation, assistive technology, and both industrial and consumer augmented/virtual reality systems.
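
As an illustrative sketch (not the authors' controller) of a uniaxial guidance cue like the one this abstract describes, the function below maps a signed angular error to a pair of antagonistic tendon commands; the interface, gain, and saturation band are assumptions:

def tendon_commands(target_deg, current_deg, gain=0.8, band_deg=30.0):
    # Proportional cue: tension the tendon on the side of the target,
    # saturating once the error exceeds an (assumed) 30-degree band.
    error = target_deg - current_deg               # signed angular error
    cmd = max(-1.0, min(1.0, gain * error / band_deg))
    # One command per antagonistic tendon, each in [0, 1].
    return (cmd, 0.0) if cmd >= 0.0 else (0.0, -cmd)
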
  2. Current collaborative augmented reality (AR) systems establish a common localization coordinate frame among users by exchanging and comparing maps composed of feature points. However, relative positioning through map sharing struggles in dynamic or feature-sparse environments. It also requires that users exchange identical regions of the map, which may not be possible if they are separated by walls or facing different directions. In this paper, we present Cappella, an infrastructure-free 6-degrees-of-freedom (6DOF) positioning system for multi-user AR applications that uses motion estimates and range measurements between users to establish an accurate relative coordinate system. (Like its musical inspiration, Cappella utilizes collaboration among agents to forgo the need for instrumentation.) Cappella uses visual-inertial odometry (VIO) in conjunction with ultra-wideband (UWB) ranging radios to estimate the relative position of each device in an ad hoc manner. The system leverages a collaborative particle filtering formulation that operates on sporadic messages exchanged between nearby users. Unlike visual landmark sharing approaches, this allows for collaborative AR sessions even if users do not share the same field of view, or if the environment is too dynamic for feature matching to be reliable. We show that not only is it possible to perform collaborative positioning without infrastructure or global coordinates, but that our approach provides nearly the same level of accuracy as fixed-infrastructure approaches for AR teaming applications. Cappella consists of an open source UWB firmware and a reference mobile phone application that can display the location of team members in real time using mobile AR. We evaluate Cappella across multiple buildings under a wide variety of conditions, including a contiguous 30,000-square-foot region spanning multiple floors, and find that it achieves a median 3D geometric error of less than 1 meter.
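
Cappella's actual filter is not reproduced in this record; the sketch below illustrates the general VIO-plus-UWB particle-filtering recipe the abstract describes, with illustrative parameters (particle count, noise scales, resampling threshold) that are assumptions rather than the paper's values:

import numpy as np

rng = np.random.default_rng(0)

def init_particles(n=500, extent=20.0):
    # Hypotheses for a peer's position relative to us, with uniform weights.
    return rng.uniform(-extent, extent, size=(n, 3)), np.full(n, 1.0 / n)

def predict(particles, peer_vio_delta, motion_sigma=0.05):
    # Shift every hypothesis by the peer's self-reported VIO displacement,
    # plus noise to model odometry drift.
    return particles + peer_vio_delta + rng.normal(0.0, motion_sigma, particles.shape)

def update(particles, weights, uwb_range, range_sigma=0.3):
    # Reweight hypotheses by how well they explain a UWB range measurement.
    predicted = np.linalg.norm(particles, axis=1)
    weights = weights * np.exp(-0.5 * ((predicted - uwb_range) / range_sigma) ** 2)
    weights = (weights + 1e-300) / (weights + 1e-300).sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
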
  3. A "virtual mirror" is a promising interface for virtual or augmented reality applications in which users benefit from seeing themselves within the environment, such as serious games for rehabilitation exercise or biological education. While there is extensive work analyzing pointing and providing assistance for first-person perspectives, mirrored third-person perspectives have received minimal attention, limiting the quality of user interactions in current virtual mirror applications. We address this gap with two user studies aimed at understanding pointing motions with a mirror view and assessing visual cues that assist pointing. An initial two-phase preliminary study had users tune and test nine different visual aids; this was followed by in-depth testing of the best four of those visual aids compared with unaided pointing. Results give insight into both aided and unaided pointing with this mirrored third-person view, and compare the visual cues. We note that users, when first introduced to the pointing task, consistently point far in front of targets, but that this initial unaided motion improves after practice with visual aids. We found that stereoscopy alone is not sufficient to enhance accuracy, supporting the use of the other visual cues we developed. We show that users perform pointing differently when pointing behind versus in front of themselves. Finally, we suggest which visual aids are most promising for 3D pointing in virtual mirror interfaces.
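
As a small aside on the rendering math underlying any virtual mirror (not part of the study itself), a mirrored third-person view can be produced by reflecting scene points across a mirror plane; the sketch below shows that reflection, with illustrative names:

import numpy as np

def mirror_point(point, plane_point, plane_normal):
    # Reflect a 3D point across the mirror plane. Note that reflection flips
    # handedness, so a renderer must also flip triangle winding (or negate
    # one scale axis) when drawing the mirrored avatar.
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(point - plane_point, n)  # signed distance to the plane
    return point - 2.0 * d * n
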
  4. Augmented reality (AR) has been used to guide users in multi-step tasks, providing information about the current step (cueing) or future steps (precueing). However, existing work exploring cueing and precueing a series of rigid-body transformations requiring rotation has only examined one-degree-of-freedom (DoF) rotations alone or in conjunction with 3DoF translations. In contrast, we address sequential tasks involving 3DoF rotations and 3DoF translations. We built a testbed to compare two types of visualizations for cueing and precueing steps. In each step, a user picks up an object, rotates it in 3D while translating it in 3D, and deposits it in a target 6DoF pose. Action-based visualizations show the actions needed to carry out a step and goal-based visualizations show the desired end state of a step. We conducted a user study to evaluate these visualizations and the efficacy of precueing. Participants performed better with goal-based visualizations than with action-based visualizations, and most effectively with goal-based visualizations aligned with the Euler axis. However, only a few of our participants benefited from precues, most likely because of the cognitive load of 3D rotations. 
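
This record does not include the paper's code; as a minimal illustration of the quaternion math behind an Euler-axis-aligned cue, the sketch below computes the single axis and angle that rotate a current orientation onto a target (Hamilton convention, (w, x, y, z) ordering; function names are hypothetical):

import numpy as np

def quat_conj(q):
    # Conjugate of a unit quaternion (w, x, y, z).
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(a, b):
    # Hamilton product of two quaternions (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])

def euler_axis(q_current, q_target):
    # The single (Euler) axis and angle rotating the current pose onto
    # the target, taking the shorter of the two equivalent rotations.
    q_rel = quat_mul(q_target, quat_conj(q_current))
    w = np.clip(q_rel[0], -1.0, 1.0)
    angle = 2.0 * np.arccos(abs(w))             # shortest-path angle (rad)
    v = q_rel[1:] * (1.0 if w >= 0 else -1.0)   # match the shortest direction
    norm = np.linalg.norm(v)
    axis = v / norm if norm > 1e-9 else np.array([1.0, 0.0, 0.0])
    return axis, angle
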
  5. Relocation of haptic feedback from the fingertips to the wrist has been considered as a way to enable haptic interaction with mixed reality virtual environments while leaving the fingers free for other tasks. We present a pair of wrist-worn tactile haptic devices and a virtual environment to study how various mappings between fingers and tactors affect task performance. The haptic feedback rendered to the wrist reflects the interaction forces occurring between a virtual object and virtual avatars controlled by the index finger and thumb. We performed a user study comparing four different finger-to-tactor haptic feedback mappings and one no-feedback condition as a control. We evaluated users' ability to perform a simple pick-and-place task via the metrics of task completion time, path length of the fingers and virtual cube, and magnitudes of normal and shear forces at the fingertips. We found that multiple mappings were effective, and there was a greater impact when visual cues were limited. We discuss the limitations of our approach and describe next steps toward multi-degree-of-freedom haptic rendering for wrist-worn devices to improve task performance in virtual environments. 
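
The study's rendering pipeline is not shown here; as a minimal sketch of one possible finger-to-tactor mapping of the kind the abstract compares, the function below converts fingertip interaction forces into tactor drive levels (the mapping, names, and force scale are all assumptions):

def tactor_intensities(normal_force, shear_force, max_force=5.0):
    # Map fingertip force magnitudes to [0, 1] drive levels, one wrist
    # tactor per force component; the 5 N full scale is an assumption.
    scale = lambda f: min(abs(f), max_force) / max_force
    return {"normal_tactor": scale(normal_force),
            "shear_tactor": scale(shear_force)}
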