Title: Redirecting Desktop Interface Input to Animate Cross-Reality Avatars
We present and evaluate methods to redirect desktop inputs such as eye gaze and mouse pointing to a VR-embedded avatar. We use these methods to build a novel interface that allows a desktop user to give presentations in remote VR meetings such as conferences or classrooms. Recent work on such VR meetings suggests that a substantial number of users continue to use desktop interfaces due to ergonomic or technical factors. Our approach enables desktop and immersed users to better share virtual worlds by allowing desktop-based users to have more engaging and present "cross-reality" avatars. The described redirection methods consider mouse pointing and drawing for a presentation, eye-tracked gaze toward audience members, hand tracking for gesturing, and associated avatar motions such as head and torso movement. A study compared different levels of desktop avatar control and headset-based control. The results suggest that users consider the enhanced desktop avatar to be more human-like and lively than a conventionally animated desktop avatar, and that it draws more attention, implying that our interface and methods could be useful for future cross-reality remote learning tools.
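As one concrete illustration of input redirection, a 2D mouse position over a shared slide can be projected into the 3D scene and used to orient the avatar's head. The sketch below is a minimal, hypothetical version of that idea; the function names, slide layout, and angle conventions are our assumptions, not the paper's implementation:

```python
import math

def mouse_to_head_rotation(mouse_xy, screen_size, slide_corners_3d, head_pos_3d):
    """Redirect a 2D desktop mouse position to yaw/pitch for a VR avatar's head.

    Minimal sketch: project the cursor onto the virtual slide quad, then
    rotate the head to look at that 3D point. Names and conventions are
    illustrative assumptions, not the paper's implementation.
    """
    u = mouse_xy[0] / screen_size[0]   # normalized cursor coordinates in [0, 1]
    v = mouse_xy[1] / screen_size[1]
    # Bilinear interpolation across the slide's four 3D corners
    # (top-left, top-right, bottom-left, bottom-right).
    tl, tr, bl, br = slide_corners_3d
    top = [tl[i] + (tr[i] - tl[i]) * u for i in range(3)]
    bot = [bl[i] + (br[i] - bl[i]) * u for i in range(3)]
    target = [top[i] + (bot[i] - top[i]) * v for i in range(3)]
    # Look-at direction from the avatar's head to the projected point.
    d = [target[i] - head_pos_3d[i] for i in range(3)]
    yaw = math.atan2(d[0], d[2])                      # turn left/right
    pitch = math.atan2(d[1], math.hypot(d[0], d[2]))  # positive pitch looks up
    return yaw, pitch

# Cursor near the slide's top-right corner: head turns right and tilts up.
corners = [(-1, 2, 3), (1, 2, 3), (-1, 1, 3), (1, 1, 3)]
print(mouse_to_head_rotation((1700, 100), (1920, 1080), corners, (0, 1.6, 0)))
```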
Award ID(s):
1815976
NSF-PAR ID:
10338219
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
2022 IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR)
Page Range / eLocation ID:
843 to 851
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Extended reality (XR) technologies, such as virtual reality (VR) and augmented reality (AR), provide users, their avatars, and embodied agents a shared platform to collaborate in a spatial context. Although traditional face-to-face communication is limited by users’ proximity, meaning that another human’s non-verbal embodied cues become more difficult to perceive the farther one is from that person, researchers and practitioners have started to look into ways to accentuate or amplify such embodied cues and signals to counteract the effects of distance with XR technologies. In this article, we describe and evaluate the Big Head technique, in which a human’s head in VR/AR is scaled up relative to their distance from the observer as a mechanism for enhancing the visibility of non-verbal facial cues, such as facial expressions or eye gaze. To better understand and explore this technique, we present two complementary human-subject experiments. In our first experiment, we conducted a VR study with a head-mounted display to understand the impact of increased or decreased head scales on participants’ ability to perceive facial expressions, as well as their sense of comfort and feeling of “uncanniness,” over distances of up to 10 m. We explored two different scaling methods and compared perceptual thresholds and user preferences. Our second experiment was performed in an outdoor AR environment with an optical see-through head-mounted display. Participants were asked to estimate facial expressions and eye gaze, and to identify a virtual human, over large distances of 30, 60, and 90 m. In both experiments, our results show significant differences in the minimum, maximum, and ideal head scales for different distances and tasks related to perceiving faces, facial expressions, and eye gaze, and we also found that participants were more comfortable with slightly bigger heads at larger distances. We discuss our findings with respect to the technologies used, and we discuss implications and guidelines for practical applications that aim to leverage XR-enhanced facial cues.
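The core of the technique can be sketched as a distance-dependent scale factor applied to the avatar's head. The following is a hedged illustration of one plausible scaling method, using placeholder constants rather than the perceptual thresholds measured in the experiments:

```python
def big_head_scale(distance_m, base_dist=1.0, max_scale=3.0, per_meter=0.25):
    """Scale an avatar's head up with viewing distance (the 'Big Head' idea).

    A sketch of one possible scaling method: a linear ramp in distance,
    clamped to a maximum. The parameter values are placeholders, not the
    experimentally derived thresholds from the article.
    """
    if distance_m <= base_dist:
        return 1.0  # no scaling at close, conversational range
    scale = 1.0 + per_meter * (distance_m - base_dist)
    return min(scale, max_scale)

for d in (1, 5, 10, 30):
    print(d, "m ->", round(big_head_scale(d), 2))
```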
  2. We study student experiences of social VR for remote instruction, with students attending class from home. The study evaluates student experiences when: (1) viewing remote lectures with VR headsets, (2) viewing with desktop displays, (3) presenting with VR headsets, and (4) reflecting on several weeks of VR-based class attendance. Students rated factors such as presence, social presence, simulator sickness, communication methods, avatar and application features, and tradeoffs with other remote approaches. Headset-based viewing and presenting produced higher presence than desktop viewing, but had less-clear impact on overall experience and on most social presence measures. We observed higher attentional allocation scores for headset-based presenting than for both viewing methods. For headset VR, there were strong negative correlations between simulator sickness (primarily reported as general discomfort) and ratings of co-presence, overall experience, and some other factors. This suggests that comfortable users experienced substantial benefits of headset viewing and presenting, but others did not. Based on the type of virtual environment, student ratings, and comments, reported discomfort appears related to physical ergonomic factors or technical problems. Desktop VR appears to be a good alternative for uncomfortable students, and students report that they prefer a mix of headset and desktop viewing. We additionally provide insight from students and a teacher about possible improvements for VR class technology, and we summarize student opinions comparing viewing and presenting in VR to other remote class technologies.
  3. We introduce SearchGazer, a web-based eye tracker for remote web search studies using common webcams already present in laptops and some desktop computers. SearchGazer is a pure JavaScript library that infers the gaze behavior of searchers in real time. The eye tracking model self-calibrates by watching searchers interact with the search pages and trains a mapping of eye features to gaze locations and search page elements on the screen. Contrary to typical eye tracking studies in information retrieval, this approach does not require the purchase of any additional specialized equipment, and can be done remotely in a user's natural environment, leading to cheaper and easier visual attention studies. While SearchGazer is not intended to be as accurate as specialized eye trackers, it is able to replicate many of the research findings of three seminal information retrieval papers: two that used eye tracking devices, and one that used the mouse cursor as a restricted focus viewer. Charts and heatmaps from those original papers are plotted side-by-side with SearchGazer results. While the main results are similar, there are some notable differences, which we hypothesize derive from improvements in the latest ranking technologies used by current versions of search engines and diligence by remote users. As part of this paper, we also release SearchGazer as a library that can be integrated into any search page. 
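SearchGazer itself is a JavaScript library, but its self-calibration idea (learning a mapping from webcam eye features to screen coordinates using clicks as implicit labels, since users tend to look where they click) can be sketched generically. The ridge-regression stand-in below is an illustrative assumption, not SearchGazer's actual model:

```python
import numpy as np

# Self-calibration sketch: learn a linear map from eye features to
# on-screen gaze coordinates, with click locations as training labels.
# This is a generic ridge-regression stand-in for illustration only.
rng = np.random.default_rng(0)
n, n_features = 200, 12                  # e.g., pupil positions, eye-patch pixels
X = rng.normal(size=(n, n_features))     # eye features captured at each click
W_true = rng.normal(size=(n_features, 2))
Y = X @ W_true + rng.normal(scale=5.0, size=(n, 2))  # click (x, y) in pixels

lam = 1.0  # ridge penalty keeps the fit stable with noisy webcam features
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

new_features = rng.normal(size=(1, n_features))
print("predicted gaze (x, y):", (new_features @ W).round(1))
```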
  4. Emerging Virtual Reality (VR) displays with embedded eye trackers are becoming commodity hardware (e.g., HTC Vive Pro Eye). Eye-tracking data can be utilized for several purposes, including gaze monitoring, privacy protection, and user authentication/identification. Identifying users is an integral part of many applications due to security and privacy concerns. In this paper, we explore methods and eye-tracking features that can be used to identify users. Prior VR researchers explored machine learning on motion-based data (such as body motion, head tracking, eye tracking, and hand tracking data) to identify users. Such systems usually require an explicit VR task and many features to train the machine learning model for user identification. We propose a system that identifies users using minimal eye-gaze-based features, without designing any identification-specific tasks. We collected gaze data from an educational VR application and tested our system with two machine learning (ML) models, random forest (RF) and k-nearest-neighbors (kNN), and two deep learning (DL) models, convolutional neural networks (CNN) and long short-term memory (LSTM). Our results show that ML and DL models could identify users with over 98% accuracy using only six simple eye-gaze features. We discuss our results, their implications for security and privacy, and the limitations of our work.
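The identification pipeline described above maps naturally to a few lines of scikit-learn. The sketch below trains a random forest on synthetic data with six gaze features; the feature semantics and data are illustrative assumptions, since the paper's exact feature set and dataset are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Sketch: identify users from a handful of simple gaze features with a
# random forest. The six features are illustrative guesses (e.g., mean/std
# of gaze x and y, fixation duration, saccade length); data is synthetic.
rng = np.random.default_rng(42)
n_users, samples_per_user = 10, 100
centers = rng.normal(scale=3.0, size=(n_users, 6))  # per-user gaze signatures
X = np.vstack([c + rng.normal(size=(samples_per_user, 6)) for c in centers])
y = np.repeat(np.arange(n_users), samples_per_user)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("identification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```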
  5. This paper explains the design of a prototype desktop and augmented Virtual Reality (VR) framework as a medium to deliver instructional materials to students in an introductory computer animation course. The framework was developed as part of a Teaching Innovation Grant to provide a cost-effective and innovative instructional framework that engages and stimulates students. Desktop-based virtual reality presents a 3-dimensional (3D) world using the display of a standard desktop computer available in most PC labs on campus. This required course draws students not only from the primary department but from other colleges and departments as well. Desktop VR was chosen as the medium for this study due to its ease of access and affordability: the framework can be visualized and accessed on the computers already available in campus PC labs, making it a low-cost framework accessible to all students. The concepts of computer graphics, modeling, and animation, instead of being presented using conventional methods such as notes or PowerPoint presentations, are presented interactively on a desktop display. The framework allows users to interact with on-screen objects not only via the standard mouse and keyboard but also through multiple forms of HCI such as a touchscreen, touchpad, and 3D mouse; accordingly, the modules were developed from scratch for access via regular desktop PCs. Such a framework supports effective pedagogical strategies such as active learning (AL) and project-based learning (PBL), which are especially relevant to a highly lab-oriented course like this ‘Introduction to Animation’ course. Finally, the framework was also tested on a range of VR media to check its accessibility. On the whole, the proposed framework can be used not only to teach basic modeling and animation concepts such as spatial coordinates, coordinate systems, transformations, and parametric curves, but also to teach basic graphics programming concepts.
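As one example of the kind of content such modules present, a parametric curve can be evaluated directly from its definition. The following sketch of a cubic Bézier evaluator is illustrative only and is independent of the authors' framework:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1].

    One of the parametric-curve concepts such a course might visualize;
    a standalone sketch, not code from the described framework.
    """
    s = 1.0 - t
    return tuple(
        s**3 * p0[i] + 3 * s**2 * t * p1[i] + 3 * s * t**2 * p2[i] + t**3 * p3[i]
        for i in range(len(p0))
    )

# Sample five points along a 2D curve from (0, 0) to (3, 0).
pts = [cubic_bezier((0, 0), (1, 2), (2, 2), (3, 0), t / 4) for t in range(5)]
print(pts)
```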