Title: Pointing in the third-person: an exploration of human motion and visual pointing aids for 3D virtual mirrors
Abstract A “virtual mirror” is a promising interface for virtual or augmented reality applications in which users benefit from seeing themselves within the environment, such as serious games for rehabilitation exercise or biological education. While there is extensive work analyzing pointing and providing assistance for first-person perspectives, mirrored third-person perspectives have been minimally considered, limiting the quality of user interactions in current virtual mirror applications. We address this gap with two user studies aimed at understanding pointing motions with a mirror view and assessing visual cues that assist pointing. An initial two-phase preliminary study had users tune and test nine different visual aids. This was followed by in-depth testing of the best four of those visual aids compared with unaided pointing. Results give insight into both aided and unaided pointing with this mirrored third-person view, and compare visual cues. We note a pattern of consistently pointing far in front of targets when first introduced to the pointing task, but that initial unaided motion improves after practice with visual aids. We found that the presence of stereoscopy is not sufficient for enhancing accuracy, supporting the use of other visual cues that we developed. We show that users perform pointing differently when pointing behind and in front of themselves. We finally suggest which visual aids are most promising for 3D pointing in virtual mirror interfaces.
Award ID(s): 1815976
PAR ID: 10406657
Author(s) / Creator(s): ;
Publisher / Repository: Springer Science + Business Media
Date Published:
Journal Name: Virtual Reality
Volume: 27
Issue: 3
ISSN: 1359-4338
Page Range / eLocation ID: p. 2099-2116
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Efthimiou, E.; Fotinea, S-E.; Hanke, T.; McDonald, J.; Shterionov, D.; Wolfe, R. (Ed.)
    With improved and more easily accessible technology, immersive virtual reality (VR) head-mounted devices have become more ubiquitous. As signing avatar technology improves, virtual reality presents a new and relatively unexplored application for signing avatars. This paper discusses two primary ways that signed language can be represented in immersive virtual spaces: 1) Third-person, in which the VR user sees a character who communicates in signed language; and 2) First-person, in which the VR user produces signed content themselves, tracked by the head-mounted device and visible to the user herself (and/or to other users) in the virtual environment. We will discuss the unique affordances granted by virtual reality and how signing avatars might bring accessibility and new opportunities to virtual spaces. We will then discuss the limitations of signed content in virtual reality concerning virtual signers shown from both third- and first-person perspectives. 
  2. In a future of pervasive augmented reality (AR), AR systems will need to be able to efficiently draw or guide the attention of the user to visual points of interest in their physical-virtual environment. Since AR imagery is overlaid on top of the user's view of their physical environment, these attention guidance techniques must not only compete with other virtual imagery, but also with distracting or attention-grabbing features in the user's physical environment. Because of the wide range of physical-virtual environments that pervasive AR users will find themselves in, it is difficult to design visual cues that “pop out” to the user without performing a visual analysis of the user's environment, and changing the appearance of the cue to stand out from its surroundings. In this paper, we present an initial investigation into the potential uses of dichoptic visual cues for optical see-through AR displays, specifically cues that involve having a difference in hue, saturation, or value between the user's eyes. These types of cues have been shown to be preattentively processed by the user when presented on other stereoscopic displays, and may also be an effective method of drawing user attention on optical see-through AR displays. We present two user studies: one that evaluates the saliency of dichoptic visual cues on optical see-through displays, and one that evaluates their subjective qualities. Our results suggest that hue-based dichoptic cues or “Forbidden Colors” may be particularly effective for these purposes, achieving significantly lower error rates in a pop out task compared to value-based and saturation-based cues. 
  3. In this paper, we report on initial work exploring the potential value of technology-mediated cues and signals to improve cross-reality interruptions. We investigated the use of color-coded visual cues (LED lights) to help a person decide when to interrupt a virtual reality (VR) user, and a gesture-based mechanism (waving at the user) to signal their desire to do so. To assess the potential value of these mechanisms we conducted a preliminary 2×3 within-subjects experimental design user study (N=10) where the participants acted in the role of the interrupter. While we found that our visual cues improved participants' experiences, our gesture-based signaling mechanism did not, as users did not trust it nor consider it as intuitive as a speech-based mechanism might be. Our preliminary findings motivate further investigation of interruption cues and signaling mechanisms to inform future VR head-worn display system designs. 
  4. Abstract We present an experimental investigation of spatial audio feedback using smartphones to support direction localization in pointing tasks for people with visual impairments (PVIs). We do this using a mobile game based on a bow-and-arrow metaphor. Our game provides a combination of spatial and non-spatial (sound beacon) audio to help the user locate the direction of the target. Our experiments with sighted, sighted-blindfolded, and visually impaired users show that (a) the efficacy of spatial audio is relatively higher for PVIs than for blindfolded sighted users during the initial reaction time for direction localization, (b) the general behavior of PVIs and blindfolded individuals is statistically similar, and (c) the lack of spatial audio significantly reduces localization performance even in sighted blindfolded users. Based on our findings, we discuss the system and interaction design implications for making future mobile-based spatial interactions accessible to PVIs.
  5.
    Third-person is a popular perspective for video games, but virtual reality (VR) is primarily experienced from a first-person point of view (POV). While a first-person POV generally offers the highest presence, a third-person POV allows users to see their avatar, which supports a stronger bond with it, and the higher vantage point generally improves spatial awareness and navigation. Third-person locomotion is generally implemented using a controller or keyboard, with users often sitting down, an approach that is considered to offer low presence and embodiment. We present a novel third-person locomotion method that enables high avatar embodiment by integrating skeletal tracking with head-tilt based input to enable omnidirectional navigation beyond the confines of available tracking space. By interpreting movement relative to an avatar, the user always keeps facing the camera, which optimizes skeletal tracking and keeps the required instrumentation minimal (one depth camera). A user study compares the performance, usability, VR sickness incidence, and avatar embodiment of our method against using a controller for a navigation task that involves interacting with objects. Though a controller offers higher performance and usability, our locomotion method offered significantly higher avatar embodiment.