Title: Evaluating the Impact of Limited Physical Space on the Navigation Performance of Two Locomotion Methods in Immersive Virtual Environments
Consumer-level virtual experiences almost always occur where physical space is limited, whether by the constraints of an indoor space or of a tracked area. This observation, coupled with the need for movement through large virtual spaces, has resulted in a proliferation of research into locomotion interfaces that decouple movement through the virtual environment from movement in the real world. While many locomotion interfaces support some kind of movement in the real world, some do not. This paper examines the effect of the amount of physical space used in the real world on one popular locomotion interface, resetting, compared to a locomotion interface that requires minimal physical space, walking in place. The metric used to compare the two locomotion interfaces was navigation performance, specifically the acquisition of survey knowledge. We find that, while there are trade-offs between the two methods, walking in place is preferable in small spaces.
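To make the space demands of the two interfaces concrete, the sketch below shows the kind of boundary-proximity check a resetting interface must perform, and which walking in place avoids entirely. The tracked-area dimensions, margin, and function names are illustrative assumptions, not details from the paper.

```python
import math

# Illustrative tracked-area half-extents and margin (assumed, not from the paper).
HALF_WIDTH, HALF_DEPTH = 1.5, 1.5   # meters
RESET_MARGIN = 0.4                   # trigger a reset this close to the edge

def needs_reset(x, z):
    """True when the user is near enough to the tracked-area boundary
    that a resetting interface would interrupt them to reorient.
    Walking in place never triggers this check, since the user's
    physical position stays roughly fixed."""
    return (abs(x) > HALF_WIDTH - RESET_MARGIN
            or abs(z) > HALF_DEPTH - RESET_MARGIN)

def heading_to_center(x, z):
    """Yaw (radians, measured from the +z axis toward +x) that faces
    the user back toward the center of the tracked area after a reset."""
    return math.atan2(-x, -z)
```

The smaller the tracked area, the more often `needs_reset` fires, which is one way to see why resetting degrades as physical space shrinks.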
Award ID(s):
1763966
NSF-PAR ID:
10404234
Author(s) / Creator(s):
Date Published:
Journal Name:
2022 IEEE Virtual Reality and 3D User Interfaces (VR)
Page Range / eLocation ID:
821 to 831
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Virtual environments (VEs) can be infinitely large, but movement of the virtual reality (VR) user is constrained by the surrounding real environment. Teleporting has become a popular locomotion interface to allow complete exploration of the VE. To teleport, the user selects the intended position (and sometimes orientation) before being instantly transported to that location. However, locomotion interfaces such as teleporting can cause disorientation. This experiment explored whether practice and feedback when using the teleporting interface can reduce disorientation. VR headset owners participated remotely. On each trial of a triangle completion task, the participant traveled along two path legs through a VE before attempting to point to the path origin. Travel was completed with one of two teleporting interfaces that differed in the availability of rotational self-motion cues. Participants in the feedback condition received feedback about their pointing accuracy. For both teleporting interfaces tested, feedback caused significant improvement in pointing performance, and practice alone caused only marginal improvement. These results suggest that disorientation in VR can be reduced through feedback-based training. 
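A concrete way to score this triangle completion task is to reduce each pointing response to an unsigned angular error between the pointed direction and the true direction to the path origin. The sketch below is a minimal version of that computation; the coordinate and yaw conventions are assumptions, not taken from the experiment.

```python
import math

def pointing_error_deg(origin, observer, pointed_yaw_deg):
    """Absolute angular error (degrees) between the direction the
    participant pointed and the true direction to the path origin.
    Positions are (x, z) pairs on the ground plane; yaw is measured
    from the +z axis toward +x."""
    dx = origin[0] - observer[0]
    dz = origin[1] - observer[1]
    true_yaw = math.degrees(math.atan2(dx, dz))
    # wrap the difference into (-180, 180] before taking the magnitude
    err = (pointed_yaw_deg - true_yaw + 180.0) % 360.0 - 180.0
    return abs(err)

# Example: origin at (0, 0), observer at (2, 3), pointing straight along +z.
print(pointing_error_deg((0.0, 0.0), (2.0, 3.0), 0.0))  # ~146.3 degrees
```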
  2. Spatial perspective taking is an essential cognitive ability that enables people to imagine how an object or scene would appear from a perspective different from their current physical viewpoint. This process is fundamental for successful navigation, especially when people use navigational aids (e.g., maps) whose information is shown from a different perspective. Research on spatial perspective taking is primarily conducted using paper-and-pencil tasks or computerized figural tasks. However, in daily life, navigation takes place in three-dimensional (3D) space and involves movement of human bodies through space, and people need to map the perspective indicated by a 2D, top-down, external representation onto their current 3D surroundings to guide their movements to goal locations. In this study, we developed an immersive viewpoint transformation task (iVTT) using ambulatory virtual reality (VR) technology. In the iVTT, people physically walked to a goal location in a virtual environment, using a first-person perspective, after viewing a map of the same environment from a top-down perspective. Comparing this task with a computerized version of a popular paper-and-pencil perspective taking task (SOT: Spatial Orientation Task), the results indicated that the SOT is highly correlated with angle production error, but not distance error, in the iVTT. Overall angular error in the iVTT was higher than in the SOT. People utilized intrinsic body axes (front/back axis or left/right axis) similarly in the SOT and the iVTT, although there were some minor differences. These results suggest that the SOT and the iVTT capture common variance and cognitive processes, but are also subject to unique sources of error caused by different cognitive processes. The iVTT provides a new immersive VR paradigm to study perspective taking ability in a space encompassing human bodies, and advances our understanding of perspective taking in the real world.
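The abstract scores responses on two separate components, angle production error and distance error. One plausible way to decompose a walked response into those components is sketched below; the exact scoring used in the iVTT is not given here, so the function and its conventions are assumptions.

```python
import math

def ivtt_errors(goal, start, end):
    """Split a walked response into two components: angle production
    error (degrees between the intended and produced walking directions,
    both taken from the start position) and signed distance error
    (meters, produced minus intended). Positions are (x, z) pairs;
    yaw is measured from the +z axis toward +x."""
    gx, gz = goal[0] - start[0], goal[1] - start[1]
    ex, ez = end[0] - start[0], end[1] - start[1]
    angle = math.degrees(math.atan2(ex, ez) - math.atan2(gx, gz))
    angle = (angle + 180.0) % 360.0 - 180.0  # wrap into (-180, 180]
    dist = math.hypot(ex, ez) - math.hypot(gx, gz)
    return abs(angle), dist
```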
  3. Impossible spaces have been used to increase the amount of virtual space available for real walking within a constrained physical space. In this technique, multiple virtual rooms are allowed to occupy overlapping portions of the physical space, in a way that is not possible in real Euclidean space. Prior work has explored detection thresholds for impossible spaces; however, very little work has considered other aspects of how impossible spaces alter participants' perception of spatial relationships within virtual environments. In this paper, we present a within-subjects study (n = 30) investigating how impossible spaces altered participants' perceptions of the locations of objects placed in different rooms. Participants explored three layouts with varying amounts of overlap between rooms and then pointed in the direction of various objects they had been tasked to locate. Significantly more error was observed when pointing at objects in overlapping spaces as compared to the non-overlapping layout. Further analysis suggests that participants pointed towards where objects would be located in the non-overlapping layout, regardless of how much overlap was present. This suggests that, when participants are not aware that any manipulation is present, they automatically adapt their representation of the spaces based on judgments of relative size and visible constraints on the size of the whole system.
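The key analysis, that participants pointed toward where objects would sit in a non-overlapping layout, can be framed as comparing each response against two candidate target positions. A minimal sketch of that comparison, with assumed coordinates and yaw conventions:

```python
import math

def angular_error_deg(observer, target, pointed_yaw_deg):
    """Unsigned angle (degrees) between a pointing response and the
    direction to a candidate target; yaw from the +z axis toward +x."""
    dx, dz = target[0] - observer[0], target[1] - observer[1]
    true_yaw = math.degrees(math.atan2(dx, dz))
    return abs((pointed_yaw_deg - true_yaw + 180.0) % 360.0 - 180.0)

def better_model(observer, pos_overlapped, pos_nonoverlapped, pointed_yaw_deg):
    """Which spatial model better explains a response: the object's
    actual (overlapped) location, or the location it would have in an
    equivalent non-overlapping layout?"""
    e_over = angular_error_deg(observer, pos_overlapped, pointed_yaw_deg)
    e_non = angular_error_deg(observer, pos_nonoverlapped, pointed_yaw_deg)
    return "non-overlapping" if e_non < e_over else "overlapping"
```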
  4. Gonzalez, D. (Ed.)

    Today’s research on human-robot teaming requires the ability to test artificial intelligence (AI) algorithms for perception and decision-making in complex real-world environments. Field experiments, also referred to as experiments “in the wild,” do not provide the level of detailed ground truth necessary for thorough performance comparisons and validation. Experiments on pre-recorded real-world data sets are also significantly limited in their usefulness because they do not allow researchers to test the effectiveness of active robot perception, control, or decision strategies in the loop. Additionally, research on large human-robot teams requires tests and experiments that are too costly even for industry and that may result in considerable time losses when experiments go awry. The novel Real-Time Human Autonomous Systems Collaborations (RealTHASC) facility at Cornell University interfaces real and virtual robots and humans with photorealistic simulated environments by implementing new concepts for the seamless integration of wearable sensors, motion capture, physics-based simulations, robot hardware, and virtual reality (VR). The result is an extended reality (XR) testbed through which real robots and humans in the laboratory can experience virtual worlds, inclusive of virtual agents, through real-time visual feedback and interaction. VR body tracking by DeepMotion is employed in conjunction with the OptiTrack motion capture system to transfer every human subject and robot in the real physical laboratory space into a synthetic virtual environment. The resulting human/robot avatars not only mimic the behaviors of the real agents but also experience the virtual world through virtual sensors and transmit the sensor data back to the real human/robot agents, all in real time. New cross-domain synthetic environments are created in RealTHASC using Unreal Engine™, bridging the simulation-to-reality gap and allowing for the inclusion of underwater/ground/aerial autonomous vehicles, each equipped with a multi-modal sensor suite. The experimental capabilities offered by RealTHASC are demonstrated through three case studies showcasing mixed real/virtual human/robot interactions in diverse domains, leveraging and complementing the benefits of experimentation in simulation and in the real world.
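The bidirectional real-to-virtual loop described above can be summarized as a fixed-rate relay. Everything in the sketch below is a hypothetical stand-in: the actual OptiTrack, DeepMotion, and Unreal Engine interfaces are not reproduced, and the 90 Hz rate is a typical VR figure rather than a stated property of RealTHASC.

```python
import time

# Hypothetical stand-ins for the capture and rendering endpoints.
def read_tracked_pose():
    """One tracked pose from motion capture (position + quaternion)."""
    return {"pos": (0.0, 0.0, 0.0), "quat": (0.0, 0.0, 0.0, 1.0)}

def update_avatar(pose):
    """Drive the corresponding avatar in the synthetic environment."""

def read_virtual_sensors():
    """Sample the avatar's simulated sensors (e.g., camera, LiDAR)."""
    return {}

def send_to_real_agent(observation):
    """Transmit synthetic sensor data back to the real human/robot."""

RATE_HZ = 90  # assumed update rate; the facility's actual rate is not stated

for _ in range(10):  # a few frames for illustration; a real loop runs continuously
    t0 = time.perf_counter()
    update_avatar(read_tracked_pose())            # real -> virtual
    send_to_real_agent(read_virtual_sensors())    # virtual -> real
    # sleep off the remainder of the frame budget
    time.sleep(max(0.0, 1.0 / RATE_HZ - (time.perf_counter() - t0)))
```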
