Title: Rotational Self-motion Cues Improve Spatial Learning when Teleporting in Virtual Environments
Teleporting interfaces are widely used in virtual reality applications to explore large virtual environments. When teleporting, the user indicates the intended location in the virtual environment and is instantly transported, typically without self-motion cues. This project explored the cost of teleporting for the acquisition of survey knowledge (i.e., a "cognitive map"). Two teleporting interfaces were compared, one with and one without visual and body-based rotational self-motion cues. Both interfaces lacked translational self-motion cues. Participants used one of the two teleporting interfaces to find and study the locations of six objects scattered throughout a large virtual environment. After learning, participants completed two measures of cognitive map fidelity: an object-to-object pointing task and a map drawing task. The results indicate superior spatial learning when rotational self-motion cues were available. Therefore, virtual reality developers should strongly consider the benefits of rotational self-motion cues when creating and choosing locomotion interfaces.
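This record does not include analysis code, but as a hedged illustration of how an object-to-object pointing response is commonly scored, the minimal sketch below computes the signed angular error between the pointed direction and the true inter-object direction (the function name, object positions, and 2D ground-plane convention are assumptions, not details from the paper):

```python
import numpy as np

def pointing_error_deg(stand_at, point_to, pointed_direction):
    """Signed angular error (degrees) between the direction a participant
    pointed and the true direction from one object's location to another's.
    Positions and the pointed direction are 2D ground-plane coordinates."""
    true_vec = np.asarray(point_to, float) - np.asarray(stand_at, float)
    true_angle = np.arctan2(true_vec[1], true_vec[0])
    resp_angle = np.arctan2(pointed_direction[1], pointed_direction[0])
    # Wrap the difference into [-180, 180) degrees
    return (np.degrees(resp_angle - true_angle) + 180.0) % 360.0 - 180.0

# Example: standing at the lamp (0, 0) and pointing toward the chair at (3, 4)
print(pointing_error_deg((0, 0), (3, 4), (0.6, 0.8)))  # ~0.0 deg (correct response)
```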
Award ID(s):
1816029
NSF-PAR ID:
10206091
Author(s) / Creator(s):
Date Published:
Journal Name:
Symposium on Spatial User Interaction
Page Range / eLocation ID:
1 to 7
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Virtual reality users are susceptible to disorientation, particularly when using locomotion interfaces that lack self-motion cues. Environmental cues, such as boundaries defined by walls or a fence, provide information to help the user remain oriented. This experiment evaluated whether the type of boundary impacts its usefulness for staying oriented. Participants wore a head-mounted display and performed a triangle completion task in virtual reality by traveling two outbound path segments before attempting to point to the path origin. The task was completed with two teleporting interfaces differing in the availability of rotational self-motion cues, and within five virtual environments differing in the availability and type of boundaries. Pointing errors were highest in an open field without environmental cues, and lowest in a classroom with walls and landmarks. Environments with a single square boundary defined by a fence, drop-off, or floor texture discontinuity led to errors in between the open field and the classroom. Performance with the floor texture discontinuity was similar to that with navigational barriers (i.e., fence and drop-off), indicating that an effective boundary need not be a navigational impediment. These results inform spatial cognitive theory about boundary-based navigation and inform application by specifying the types of environmental and self-motion cues that designers of virtual environments should include to reduce disorientation in virtual reality.
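As a sketch of how such a triangle completion trial can be scored (the path geometry and scoring conventions below are illustrative assumptions, not the experiment's actual code), the correct homing direction follows from the two outbound legs and the turn between them:

```python
import numpy as np

def homing_error_deg(leg1, turn_deg, leg2, pointed_bearing_deg):
    """Hypothetical scoring of one triangle completion trial: travel leg1 (m),
    turn by turn_deg (positive = rightward), travel leg2 (m), then point to the
    unseen path origin. Bearings are measured clockwise from the final heading;
    the return value is the absolute angular pointing error in degrees."""
    heading = np.radians(turn_deg)                    # heading after the turn
    final_pos = np.array([0.0, leg1]) + leg2 * np.array([np.sin(heading), np.cos(heading)])
    to_origin = -final_pos                            # vector back to the path origin
    true_bearing = np.degrees(np.arctan2(to_origin[0], to_origin[1])) - turn_deg
    return abs((pointed_bearing_deg - true_bearing + 180.0) % 360.0 - 180.0)

# Example: 4 m leg, 90° right turn, 3 m leg; the origin lies about 127° to the
# right of the final heading, so pointing 127° rightward is nearly error-free.
print(round(homing_error_deg(4.0, 90.0, 3.0, 127.0), 1))  # ~0.1 deg
```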
  2. Virtual environments (VEs) can be infinitely large, but movement of the virtual reality (VR) user is constrained by the surrounding real environment. Teleporting has become a popular locomotion interface to allow complete exploration of the VE. To teleport, the user selects the intended position (and sometimes orientation) before being instantly transported to that location. However, locomotion interfaces such as teleporting can cause disorientation. This experiment explored whether practice and feedback when using the teleporting interface can reduce disorientation. VR headset owners participated remotely. On each trial of a triangle completion task, the participant traveled along two path legs through a VE before attempting to point to the path origin. Travel was completed with one of two teleporting interfaces that differed in the availability of rotational self-motion cues. Participants in the feedback condition received feedback about their pointing accuracy. For both teleporting interfaces tested, feedback caused significant improvement in pointing performance, and practice alone caused only marginal improvement. These results suggest that disorientation in VR can be reduced through feedback-based training. 
  3. Background: Drivers gather most of the information they need to drive by looking at the world around them and at visual displays within the vehicle. Navigation systems automate the way drivers navigate. In using these systems, drivers offload both tactical (route following) and strategic (route planning) aspects of navigational tasks to the automated SatNav system, freeing up cognitive and attentional resources that can be used in other tasks (Burnett, 2009). Despite the potential benefits and opportunities that navigation systems provide, their use can also be problematic. For example, research suggests that drivers using SatNav do not develop as much environmental spatial knowledge as drivers using paper maps (Waters & Winter, 2011; Parush, Ahuvia, & Erev, 2007). With the recent growth and advances in augmented reality (AR) head-up displays (HUDs), there are new opportunities to display navigation information directly within a driver's forward field of view, allowing drivers to gather the information needed to navigate without looking away from the road. While the technology is promising, the nuances of interface design and its impacts on drivers must be further understood before AR can be widely and safely incorporated into vehicles. Specifically, an impact that warrants investigation is the role of AR HUDs in spatial knowledge acquisition while driving. Acquiring high levels of spatial knowledge is crucial for navigation tasks because individuals with greater spatial knowledge are more capable of navigating based on their own internal knowledge (Bolton, Burnett, & Large, 2015). Moreover, the ability to develop an accurate and comprehensive cognitive map serves a social function, enabling individuals to navigate for others, provide verbal directions, and sketch direction maps (Hill, 1987). Given these points, the relationship between spatial knowledge acquisition and novel technologies such as AR HUDs in driving is a relevant topic for investigation. Objectives: This work explored whether providing conformal AR navigational cues improves spatial knowledge acquisition (as compared to traditional HUD visual cues) to assess the plausibility of and justification for investment in generating larger field-of-view AR HUDs with potentially multiple focal planes. Methods: This study employed a 2×2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator in which participants drove while navigating with one of two possible HUD interface designs: a world-relative arrow post sign and a screen-relative traditional arrow. During the 10–15 minute drive, participants drove the route and were encouraged to verbally share feedback as they proceeded. After the drive, participants completed a NASA-TLX questionnaire to record their perceived workload. We measured spatial knowledge at two levels: landmark and route knowledge. Landmark knowledge was assessed using an iconic recognition task, while route knowledge was assessed using a scene ordering task. After completion of the study, individuals signed a post-trial consent form and were compensated $10 for their time. Results: NASA-TLX performance subscale ratings revealed that participants felt that they performed better during the world-relative condition but at a higher rate of perceived workload. However, overall perceived workload did not differ significantly between the interface design conditions.
Landmark knowledge results suggest that the mean number of remembered scenes was statistically similar in both conditions, indicating that participants using both interface designs remembered the same proportion of on-route scenes. Deviance analyses showed that only maneuver direction influenced landmark knowledge testing performance. Route knowledge results suggest that the proportion of on-route scenes correctly sequenced by participants was similar under both conditions. Finally, participants exhibited poorer performance on the route knowledge task than on the landmark knowledge task (independent of HUD interface design). Conclusions: This driving simulator study evaluated the head-up provision of two types of AR navigation interface designs. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. Overall, the results of this initial study provide evidence that screen-relative and world-relative AR head-up display interfaces have a similar impact on spatial knowledge acquisition and perceived workload while driving. These results contrast with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective. This study instead suggests that simple, screen-fixed designs may indeed be effective in certain contexts.
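The abstract does not state exactly how the scene ordering task was scored; one simple, assumed rule consistent with "proportion of on-route scenes correctly sequenced" is a strict positional match, sketched below (the function name and example data are hypothetical):

```python
def route_knowledge_score(participant_order, true_order):
    """Hypothetical scoring of a scene-ordering (route knowledge) task: the
    proportion of on-route scenes placed in their correct serial position."""
    assert sorted(participant_order) == sorted(true_order)
    hits = sum(p == t for p, t in zip(participant_order, true_order))
    return hits / len(true_order)

# Example: five on-route scenes A-E, with two adjacent scenes transposed
print(route_knowledge_score(list("ABDCE"), list("ABCDE")))  # 0.6
```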
  4. Context: An Optimizing Performance through Intrinsic Motivation and Attention for Learning (OPTIMAL) theory-based motor learning intervention delivering autonomy support (AS) and enhanced expectancies (EE) shows promise for reducing cognitive-motor dual-task costs (the relative difference in primary task performance when completed with and without a secondary cognitive task), thereby facilitating adaptive, injury-resistant movement responses. The current pilot study sought to determine the effectiveness of an autonomy support versus an EE-enhanced virtual reality motor learning intervention to reduce dual-task costs during single-leg balance. Design: Within-subjects 3 × 3 trial. Methods: Twenty-one male and 24 female participants, between the ages of 18 and 30 years, with no history of concussion, vertigo, lower-extremity surgery, or lower-extremity injury in the previous 6 months, were recruited for training sessions on consecutive days. Training consisted of 5 × 8 single-leg squats on each leg, during which all participants mimicked an avatar through virtual reality goggles. The autonomy support group chose an avatar color, and the EE group received positive kinematic biofeedback. Baseline, immediate, and delayed retention testing consisted of single-leg balancing under single- and dual-task conditions. Mixed-model analyses of variance compared dual-task costs for center of pressure velocity and SD between groups on each limb. Results: On the right side, dual-task costs for anterior–posterior center of pressure mean and SD were reduced in the EE group (mean Δ = −51.40, Cohen d = 0.80 and SD Δ = −66.00%, Cohen d = 0.88) compared with the control group (mean Δ = −22.09, Cohen d = 0.33 and SD Δ = −36.10%, Cohen d = 0.68) from baseline to immediate retention. Conclusions: These findings indicate that EE strategies that can be easily implemented in a clinic or sport setting may be superior to task-irrelevant AS approaches for influencing injury-resistant movement adaptations.
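The study's exact computation is not reproduced here, but dual-task cost is conventionally expressed as the percent change in the primary-task measure from the single-task to the dual-task condition; a minimal sketch under that assumption (the example values are invented):

```python
def dual_task_cost_pct(single_task_value, dual_task_value):
    """Conventional dual-task cost: percent change in a primary-task measure
    (e.g., anterior-posterior center-of-pressure velocity during single-leg
    balance) from the single-task to the dual-task condition. For COP measures,
    larger values indicate poorer balance, so a larger positive cost reflects a
    bigger dual-task decrement."""
    return (dual_task_value - single_task_value) / single_task_value * 100.0

# Example: COP velocity of 2.0 cm/s alone vs. 2.6 cm/s with a concurrent cognitive task
print(dual_task_cost_pct(2.0, 2.6))  # 30.0 (% cost)
```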
  5. An accurate understanding of omnidirectional environment lighting is crucial for high-quality virtual object rendering in mobile augmented reality (AR). In particular, to support reflective rendering, existing methods have leveraged deep learning models to estimate lighting or used physical light probes to capture it, typically represented in the form of an environment map. However, these methods often fail to provide visually coherent details or require additional setups. For example, the commercial framework ARKit uses a convolutional neural network that can generate realistic environment maps; however, the corresponding reflective rendering might not match the physical environment. In this work, we present the design and implementation of a lighting reconstruction framework called LITAR that enables realistic and visually coherent rendering. LITAR addresses several challenges of supporting lighting information for mobile AR. First, to address the spatial variance problem, LITAR uses two-field lighting reconstruction to divide the lighting reconstruction task into spatial-variance-aware near-field reconstruction and directional-aware far-field reconstruction. The corresponding environment map allows reflective rendering with correct color tones. Second, LITAR uses two noise-tolerant data capturing policies to ensure data quality, namely guided bootstrapped movement and motion-based automatic capturing. Third, to handle the mismatch between mobile computation capability and the high computational requirements of lighting reconstruction, LITAR employs two novel real-time environment map rendering techniques called multi-resolution projection and anchor extrapolation. These two techniques effectively remove the need for time-consuming mesh reconstruction while maintaining visual quality. Lastly, LITAR provides several knobs to help mobile AR application developers make quality and performance trade-offs in lighting reconstruction. We evaluated the performance of LITAR using a small-scale testbed experiment and a controlled simulation. Our testbed-based evaluation shows that LITAR achieves more visually coherent rendering effects than ARKit. Our design of multi-resolution projection significantly reduces the time of point cloud projection from about 3 seconds to 14.6 milliseconds. Our simulation shows that LITAR, on average, achieves up to 44.1% higher PSNR than a recent work, Xihe, on two complex objects with physically-based materials.
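For reference, the PSNR figure cited above follows the standard definition based on mean squared error; the sketch below shows that computation on synthetic data (the data and function name are assumptions, not LITAR's evaluation code):

```python
import numpy as np

def psnr_db(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a ground-truth rendering and an
    estimate, computed from the mean squared error over images in [0, max_val]."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example on synthetic data: ~5% Gaussian noise yields roughly 26 dB
rng = np.random.default_rng(0)
ref = rng.random((64, 64, 3))
est = np.clip(ref + rng.normal(0.0, 0.05, ref.shape), 0.0, 1.0)
print(round(psnr_db(ref, est), 1))
```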