Title: Embodied Third-Person Virtual Locomotion using a Single Depth Camera
Third-person is a popular perspective for video games, but virtual reality (VR) is primarily experienced from a first-person point of view (POV). While a first-person POV generally offers the highest presence, a third-person POV allows users to see their avatar, which supports a stronger bond with it, and its higher vantage point generally improves spatial awareness and navigation. Third-person locomotion is typically implemented using a controller or keyboard, with users often sitting down, an approach considered to offer low presence and embodiment. We present a novel third-person locomotion method that enables high avatar embodiment by integrating skeletal tracking with head-tilt-based input to enable omnidirectional navigation beyond the confines of the available tracking space. Because movement is interpreted relative to the avatar, the user always keeps facing the camera, which optimizes skeletal tracking and keeps the required instrumentation minimal (a single depth camera). A user study compares the performance, usability, VR sickness incidence, and avatar embodiment of our method to those of a controller for a navigation task that involves interacting with objects. Although the controller offered higher performance and usability, our locomotion method offered significantly higher avatar embodiment.
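
The abstract describes the control mapping only at a high level, so the following is a minimal illustrative sketch, not the paper's implementation, of head-tilt-based omnidirectional locomotion: head tilt measured by skeletal tracking is mapped to an avatar-relative movement direction, so the user can steer in any direction while continuing to face the single depth camera. All names and thresholds (HeadPose, tilt_to_velocity, the dead zone and speed constants) are assumptions.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of head-tilt-based locomotion; not the authors' code.
# Tilt angles would come from the head joint of a depth camera's skeletal
# tracking. All constants below are assumed values.
DEAD_ZONE_DEG = 5.0   # ignore small, unintentional head tilts
MAX_TILT_DEG = 25.0   # tilt angle mapped to full speed
MAX_SPEED = 2.0       # avatar speed in m/s

@dataclass
class HeadPose:
    pitch_deg: float  # forward/backward tilt
    roll_deg: float   # left/right tilt

def tilt_to_velocity(pose: HeadPose, avatar_yaw_rad: float):
    """Map head tilt to a 2D avatar velocity relative to the avatar's facing.

    Forward pitch moves the avatar forward; lateral roll strafes it, so the
    user can steer omnidirectionally while continuing to face the camera.
    """
    def normalize(angle_deg: float) -> float:
        magnitude = abs(angle_deg) - DEAD_ZONE_DEG
        if magnitude <= 0.0:
            return 0.0
        scale = min(magnitude / (MAX_TILT_DEG - DEAD_ZONE_DEG), 1.0)
        return math.copysign(scale, angle_deg)

    forward = normalize(pose.pitch_deg)  # -1..1 along the avatar's forward axis
    strafe = normalize(pose.roll_deg)    # -1..1 along the avatar's right axis

    # Rotate the local (strafe, forward) input into world space by avatar yaw,
    # using a y-up convention where yaw 0 faces +z.
    vx = MAX_SPEED * (strafe * math.cos(avatar_yaw_rad) + forward * math.sin(avatar_yaw_rad))
    vz = MAX_SPEED * (-strafe * math.sin(avatar_yaw_rad) + forward * math.cos(avatar_yaw_rad))
    return vx, vz

# Example: a 15-degree forward tilt with the avatar facing along +z.
print(tilt_to_velocity(HeadPose(pitch_deg=15.0, roll_deg=0.0), avatar_yaw_rad=0.0))  # (0.0, 1.0)
```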
Award ID(s):
1911041
NSF-PAR ID:
10293921
Journal Name:
Proceedings of Graphics Interface 2021
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Problem-solving focuses on defining and analyzing problems, then finding viable solutions through an iterative process that requires brainstorming and understanding what is known and what is unknown in the problem space. As the economic landscape of the United States changes rapidly, new types of jobs emerge when new industries are created. Employers report that problem-solving is the most important skill they look for in job applicants, yet there are major concerns about the lack of problem-solving skills in engineering students. This gap calls for an approach to measure and enhance these skills. In this research, we propose to understand and improve problem-solving skills in engineering education by integrating eye-tracking sensing with virtual reality (VR) manufacturing. First, we simulate a manufacturing system in a VR game environment that we call a VR learning factory. The VR learning factory is built in the Unity game engine with the HTC Vive VR system for navigation and motion tracking. The headset is custom-fitted with Tobii eye-tracking technology, allowing the system to identify the coordinates and objects that a user is looking at, at any given time during the simulation. Through the headset, engineering students see a virtual manufacturing environment composed of a series of workstations and can interact with workpieces in the virtual environment. For example, a student can pick up virtual plastic bricks and assemble them using the wireless controller in hand. Second, engineering students are asked to design and assemble car toys that satisfy predefined customer requirements while minimizing the total cost of production. Third, data-driven models are developed to analyze the eye-movement patterns of engineering students. For instance, problem-solving skills are measured by the extent to which a student's eye-movement patterns resemble those of a subject matter expert (SME), who sets the expert criterion for the car toy assembly process. Benchmark experiments are conducted with a comprehensive set of performance metrics, such as cycle time, the number of station switches, and the weight, price, and quality of the car toys. Experimental results show that eye-tracking modeling is efficient and effective for measuring the problem-solving skills of engineering students. The proposed VR learning factory was integrated into undergraduate manufacturing courses to enhance student learning and problem-solving skills.
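    The abstract above measures problem-solving skill by how closely a student's eye-movement pattern matches an SME's, but does not specify the similarity metric. Below is a minimal sketch of one standard scanpath comparison, normalized Levenshtein distance over sequences of areas of interest (AOIs); the metric choice, names, and AOI encoding are assumptions, not necessarily the authors' method.

    ```python
    # Hypothetical sketch, not the authors' pipeline: encode each fixation as
    # the letter of the area of interest (AOI) it lands on, then score a
    # student against the expert with normalized Levenshtein edit distance.

    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance between two AOI strings."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution
            prev = curr
        return prev[-1]

    def scanpath_similarity(student_aois: str, expert_aois: str) -> float:
        """Return similarity in [0, 1]; 1.0 means identical AOI sequences."""
        longest = max(len(student_aois), len(expert_aois), 1)
        return 1.0 - levenshtein(student_aois, expert_aois) / longest

    # Illustrative AOI labels, e.g., A = parts bin, B = assembly station,
    # C = instructions.
    print(scanpath_similarity("ABBCAB", "ABCAB"))  # ~0.83
    ```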
  2. Efthimiou, E.; Fotinea, S-E.; Hanke, T.; McDonald, J.; Shterionov, D.; Wolfe, R. (Eds.)
    With improved and more easily accessible technology, immersive virtual reality (VR) head-mounted devices have become more ubiquitous. As signing avatar technology improves, virtual reality presents a new and relatively unexplored application for signing avatars. This paper discusses two primary ways that signed language can be represented in immersive virtual spaces: 1) Third-person, in which the VR user sees a character who communicates in signed language; and 2) First-person, in which the VR user produces signed content themselves, tracked by the head-mounted device and visible to the user herself (and/or to other users) in the virtual environment. We will discuss the unique affordances granted by virtual reality and how signing avatars might bring accessibility and new opportunities to virtual spaces. We will then discuss the limitations of signed content in virtual reality concerning virtual signers shown from both third- and first-person perspectives. 
  3. We present and evaluate methods to redirect desktop inputs such as eye gaze and mouse pointing to a VR-embedded avatar. We use these methods to build a novel interface that allows a desktop user to give presentations in remote VR meetings such as conferences or classrooms. Recent work on such VR meetings suggests that a substantial number of users continue to use desktop interfaces due to ergonomic or technical factors. Our approach enables desktop and immersed users to better share virtual worlds by allowing desktop-based users to have more engaging or present "cross-reality" avatars. The described redirection methods cover mouse pointing and drawing for a presentation, eye-tracked gaze towards audience members, hand tracking for gesturing, and associated avatar motions such as head and torso movement. A study compared different levels of desktop avatar control and headset-based control. The results suggest that users consider the enhanced desktop avatar to be human-like and lively and to draw more attention than a conventionally animated desktop avatar, implying that our interface and methods could be useful for future cross-reality remote learning tools.
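    The redirection methods above are described only at a high level; the following is a minimal sketch, under assumed names and a simple viewport model, of one of them: mapping a desktop user's mouse position to a yaw/pitch gaze direction for the VR-embedded avatar's head.

    ```python
    import math

    # Hypothetical sketch of one redirection: map a desktop mouse position to
    # (yaw, pitch) head angles for the VR-embedded avatar. Names, the default
    # field of view, and the flat-viewport model are illustrative assumptions.

    def mouse_to_gaze_angles(mouse_x: float, mouse_y: float,
                             screen_w: float, screen_h: float,
                             fov_h_deg: float = 90.0):
        """Map a mouse position to (yaw, pitch) in degrees for the avatar's head.

        Treats the desktop viewport as a virtual camera plane: the screen
        center maps to looking straight ahead, offsets map to rotations.
        """
        # Normalize to [-1, 1], with (0, 0) at the screen center.
        nx = 2.0 * mouse_x / screen_w - 1.0
        ny = 1.0 - 2.0 * mouse_y / screen_h  # flip: screen y grows downward

        half_w = math.tan(math.radians(fov_h_deg / 2.0))
        half_h = half_w * (screen_h / screen_w)  # match the viewport aspect ratio

        yaw = math.degrees(math.atan(nx * half_w))
        pitch = math.degrees(math.atan(ny * half_h))
        return yaw, pitch

    # Example: mouse at the right edge, vertically centered, on a 1920x1080 screen.
    print(mouse_to_gaze_angles(1920, 540, 1920, 1080))  # (45.0, 0.0)
    ```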
  4. Location-based or out-of-home entertainment refers to experiences such as theme and amusement parks, laser tag and paintball arenas, roller and ice skating rinks, zoos and aquariums, or science centers and museums, among many other family entertainment and cultural venues. More recently, location-based VR has emerged as a new category of out-of-home entertainment. These VR experiences can be likened to social entertainment options such as laser tag, where physical movement is an inherent part of the experience, in contrast to at-home VR experiences, where physical movement often needs to be replaced by artificial locomotion techniques due to tracking-space constraints. In this work, we present the first VR study to examine the impact of natural walking in a large physical space on presence and user preference. We compare it with teleportation in the same large space, since teleportation is the most commonly used locomotion technique for consumer, at-home VR. Our results show that walking was overwhelmingly preferred by the participants and that teleportation led to significantly higher self-reported simulator sickness. The data also show a trend towards higher self-reported presence for natural walking.
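    As context for the comparison above, here is a minimal sketch of the point-and-teleport locomotion commonly used in consumer VR: a ray from the controller is intersected with the ground, and the user's rig jumps to the hit point if the target is valid. The flat-ground model, range limit, and all names are illustrative assumptions, not taken from the study.

    ```python
    from dataclasses import dataclass

    # Hypothetical sketch of point-and-teleport, the locomotion technique the
    # study compares against natural walking.

    @dataclass
    class Vec3:
        x: float
        y: float
        z: float

    MAX_TELEPORT_DIST = 8.0  # meters (assumed comfort limit)

    def teleport_target(origin: Vec3, direction: Vec3, ground_y: float = 0.0):
        """Intersect the pointing ray with a flat ground plane at ground_y.

        Returns the landing point, or None when the ray points at or above
        the horizon or the target is out of range (the reticle would then
        typically turn red and the teleport is rejected).
        """
        if direction.y >= 0.0:
            return None  # no ground hit
        t = (ground_y - origin.y) / direction.y  # ray parameter at the plane
        hit = Vec3(origin.x + t * direction.x, ground_y, origin.z + t * direction.z)
        dx, dz = hit.x - origin.x, hit.z - origin.z
        if (dx * dx + dz * dz) ** 0.5 > MAX_TELEPORT_DIST:
            return None  # too far: reject to keep jumps comfortable
        return hit  # on success, the user's rig is moved here instantly

    # Example: pointing down-forward from a controller held at 1.2 m.
    print(teleport_target(Vec3(0.0, 1.2, 0.0), Vec3(0.0, -0.5, 1.0)))  # z = 2.4
    ```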
  5.
    This paper describes the interface and testing of an indoor navigation app, ASSIST, that guides blind and visually impaired (BVI) individuals through an indoor environment with high accuracy while augmenting their understanding of the surrounding environment. ASSIST features personalized interfaces that account for the unique experiences BVI individuals have in indoor wayfinding, and it offers multiple levels of multimodal feedback. After an overview of the technical approach and implementation of the first ASSIST prototype, the results of two pilot studies performed with BVI individuals are presented: a performance study collecting mobility data (walking speed, collisions, and navigation errors) while using the app, and a usability study collecting user evaluations of the app's perceived helpfulness, safety, ease of use, and overall experience. Our studies show that ASSIST is useful in providing users with navigational guidance, improving their efficiency and (more significantly) their safety and accuracy in wayfinding indoors. Findings and user feedback from the studies confirm some previous results while also providing new insights into the creation of such an app, including the use of customized user interfaces and expanding the types of information provided.
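    Two of the mobility measures named above (walking speed and a crude navigation-error proxy) can be computed from a simple position log; the sketch below shows one assumed formulation, with an illustrative log format and threshold rather than the study's actual instrumentation.

    ```python
    import math

    # Hypothetical sketch: mobility metrics from a logged trace of
    # (t_seconds, x, y) positions. Format and tolerance are assumptions.

    def walking_speed(trace):
        """Average speed (m/s) over a trace of (t_seconds, x, y) samples."""
        dist = sum(
            math.hypot(x2 - x1, y2 - y1)
            for (_, x1, y1), (_, x2, y2) in zip(trace, trace[1:])
        )
        duration = trace[-1][0] - trace[0][0]
        return dist / duration if duration > 0 else 0.0

    def count_route_deviations(trace, route, tolerance_m=1.0):
        """Count samples farther than tolerance_m from every route waypoint,
        a crude proxy for navigation errors."""
        return sum(
            1 for (_, x, y) in trace
            if all(math.hypot(x - wx, y - wy) > tolerance_m for (wx, wy) in route)
        )

    # Example: a short trace along a two-waypoint route.
    trace = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 2.0, 0.5)]
    route = [(0.0, 0.0), (2.0, 0.0)]
    print(walking_speed(trace), count_route_deviations(trace, route))
    ```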