Title: Redirecting Desktop Interface Input to Animate Cross-Reality Avatars
We present and evaluate methods to redirect desktop inputs such as eye gaze and mouse pointing to a VR-embedded avatar. We use these methods to build a novel interface that allows a desktop user to give presentations in remote VR meetings such as conferences or classrooms. Recent work on such VR meetings suggests a substantial number of users continue to use desktop interfaces due to ergonomic or technical factors. Our approach enables desktop and immersed users to better share virtual worlds by allowing desktop-based users to have more engaging or present "cross-reality" avatars. The described redirection methods consider mouse pointing and drawing for a presentation, eye-tracked gaze towards audience members, hand tracking for gesturing, and associated avatar motions such as head and torso movement. A study compared different levels of desktop avatar control and headset-based control. Study results suggest that users consider the enhanced desktop avatar to be more human-like and lively, and to draw more attention, than a conventionally animated desktop avatar, implying that our interface and methods could be useful for future cross-reality remote learning tools.
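The abstract above does not include code; the sketch below illustrates, under assumptions, how desktop mouse pointing on a shared slide and an estimated gaze target might be redirected to an avatar's pointing arm and head orientation. All names here (AvatarRedirector, look_rotation, mouse_to_slide_point) and the smoothing scheme are hypothetical, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation) of redirecting desktop input
# to an embedded presenter avatar: the mouse position on the shared slide drives
# a pointing-arm target, and an estimated gaze target drives smoothed head turns.
import numpy as np

def look_rotation(source_pos, target_pos):
    """Yaw/pitch (radians) that aim a +z 'forward' axis at target from source."""
    d = np.asarray(target_pos, float) - np.asarray(source_pos, float)
    yaw = np.arctan2(d[0], d[2])                     # turn about the vertical axis
    pitch = np.arctan2(-d[1], np.hypot(d[0], d[2]))  # tilt up/down
    return yaw, pitch

def mouse_to_slide_point(mouse_uv, slide_corners):
    """Map a normalized mouse position (u, v in [0, 1]) to a world-space point
    on the virtual slide by bilinear interpolation of its four corners."""
    u, v = mouse_uv
    tl, tr, bl, br = (np.asarray(c, float) for c in slide_corners)
    top = tl + u * (tr - tl)
    bottom = bl + u * (br - bl)
    return top + v * (bottom - top)

class AvatarRedirector:
    """Drives avatar head and pointing-arm targets from desktop mouse and gaze."""
    def __init__(self, smoothing=0.15):
        self.smoothing = smoothing   # fraction of the remaining error removed per frame
        self.head_yaw = 0.0
        self.head_pitch = 0.0

    def update_pointing(self, mouse_uv, slide_corners, hand_pos):
        """Aim the pointing arm (or laser ray) at the slide point under the cursor."""
        target = mouse_to_slide_point(mouse_uv, slide_corners)
        return look_rotation(hand_pos, target)

    def update_gaze(self, head_pos, audience_pos):
        """Turn the avatar head toward an audience member, with exponential
        smoothing so the motion looks natural rather than snapping."""
        yaw, pitch = look_rotation(head_pos, audience_pos)
        self.head_yaw += self.smoothing * (yaw - self.head_yaw)
        self.head_pitch += self.smoothing * (pitch - self.head_pitch)
        return self.head_yaw, self.head_pitch
```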
Authors:
Award ID(s):
1815976
Publication Date:
NSF-PAR ID:
10338219
Journal Name:
2022 IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR)
Page Range or eLocation-ID:
843 to 851
Sponsoring Org:
National Science Foundation
More Like this
  1. We study student experiences of social VR for remote instruction, with students attending class from home. The study evaluates student experiences when: (1) viewing remote lectures with VR headsets, (2) viewing with desktop displays, (3) presenting with VR headsets, and (4) reflecting on several weeks of VR-based class attendance. Students rated factors such as presence, social presence, simulator sickness, communication methods, avatar and application features, and tradeoffs with other remote approaches. Headset-based viewing and presenting produced higher presence than desktop viewing, but had less-clear impact on overall experience and on most social presence measures. We observed higher attentional allocation scores for headset-based presenting than for both viewing methods. For headset VR, there were strong negative correlations between simulator sickness (primarily reported as general discomfort) and ratings of co-presence, overall experience, and some other factors. This suggests that comfortable users experienced substantial benefits of headset viewing and presenting, but others did not. Based on the type of virtual environment, student ratings, and comments, reported discomfort appears related to physical ergonomic factors or technical problems. Desktop VR appears to be a good alternative for uncomfortable students, and students report that they prefer a mix of headset and desktop viewing. We additionally provide insight from students and a teacher about possible improvements for VR class technology, and we summarize student opinions comparing viewing and presenting in VR to other remote class technologies.
  2. We introduce SearchGazer, a web-based eye tracker for remote web search studies using common webcams already present in laptops and some desktop computers. SearchGazer is a pure JavaScript library that infers the gaze behavior of searchers in real time. The eye tracking model self-calibrates by watching searchers interact with the search pages and trains a mapping of eye features to gaze locations and search page elements on the screen. Contrary to typical eye tracking studies in information retrieval, this approach does not require the purchase of any additional specialized equipment, and can be done remotely in a user's natural environment, leading to cheaper and easier visual attention studies. While SearchGazer is not intended to be as accurate as specialized eye trackers, it is able to replicate many of the research findings of three seminal information retrieval papers: two that used eye tracking devices, and one that used the mouse cursor as a restricted focus viewer. Charts and heatmaps from those original papers are plotted side-by-side with SearchGazer results. While the main results are similar, there are some notable differences, which we hypothesize derive from improvements in the latest ranking technologies used by current versions of search engines and diligence by remote users. As part of this paper, we also release SearchGazer as a library that can be integrated into any search page. (An illustrative sketch of this self-calibration idea appears after this list.)
  3. This paper explains the design of a prototype desktop and augmented Virtual Reality (VR) framework as a medium to deliver instructional materials to students in an introductory computer animation course. This framework was developed as part of a Teaching Innovation Grant to propose a cost-effective and innovative instructional framework to engage and stimulate students. Desktop-based virtual reality presents a 3-dimensional (3D) world using the display of a standard desktop computer available in most of the PC labs on campus. This is a required course at this university that has students not only from the primary department, but from other colleges/departments as well. Desktop VR was chosen as the medium for this study due to its ease of access and affordability; the framework can be visualized and accessed with the computers already available in PC labs on university campuses. The proposed research is intended to serve as a low-cost framework that can be accessed by all students. The concepts of ‘computer graphics, modeling & animation’, instead of being presented using conventional methods such as notes or PowerPoint presentations, are presented in an interactive manner on a desktop display. This framework allows users to interact with the objects on the display not only via the standard mouse and keyboard, but also using multiple forms of HCI such as touchscreen, touchpad, and 3D mouse. Hence, the modules were developed from scratch for access via regular desktop PCs. Such a framework supports effective pedagogical strategies such as active learning (AL) and project-based learning (PBL), which are especially relevant to a highly lab-oriented course such as this course titled ‘Introduction to Animation’. Finally, the framework has also been tested on a range of VR media to check its accessibility. On the whole, the proposed framework can be used not only to teach basic modeling and animation concepts such as spatial coordinates, coordinate systems, transformations, and parametric curves, but also to teach basic graphics programming concepts.
  4. Background: Sustained engagement is essential for the success of telerehabilitation programs. However, patients’ lack of motivation and adherence could undermine these goals. To overcome this challenge, physical exercises have often been gamified. Building on the advantages of serious games, we propose a citizen science–based approach in which patients perform scientific tasks by using interactive interfaces and help advance scientific causes of their choice. This approach capitalizes on human intellect and benevolence while promoting learning. To further enhance engagement, we propose performing citizen science activities in immersive media, such as virtual reality (VR). Objective: This study aims to present a novel methodology to facilitate the remote identification and classification of human movements for the automatic assessment of motor performance in telerehabilitation. The data-driven approach is presented in the context of a citizen science software dedicated to bimanual training in VR. Specifically, users interact with the interface and make contributions to an environmental citizen science project while moving both arms in concert. Methods: In all, 9 healthy individuals interacted with the citizen science software by using a commercial VR gaming device. The software included a calibration phase to evaluate the users’ range of motion along the 3 anatomical planes of motion and to adapt the sensitivity of the software’s response to their movements. During calibration, the time series of the users’ movements were recorded by the sensors embedded in the device. We performed principal component analysis to identify salient features of movements and then applied a bagged trees ensemble classifier to classify the movements. Results: The classification achieved high performance, reaching 99.9% accuracy. Among the movements, elbow flexion was the most accurately classified movement (99.2%), and horizontal shoulder abduction to the right side of the body was the most misclassified movement (98.8%). Conclusions: Coordinated bimanual movements in VR can be classified with high accuracy. Our findings lay the foundation for the development of motion analysis algorithms in VR-mediated telerehabilitation. (A brief sketch of this classification pipeline appears after this list.)
  5. We present a personalized, comprehensive eye-tracking solution based on tracking higher-order Purkinje images, suited explicitly for eyeglasses-style AR and VR displays. Existing eye-tracking systems for near-eye applications are typically designed to work for an on-axis configuration and rely on pupil center and corneal reflections (PCCR) to estimate gaze with an accuracy of only about 0.5° to 1°. These are often expensive, bulky in form factor, and fail to estimate monocular accommodation, which is crucial for focus adjustment within the AR glasses. Our system independently measures the binocular vergence and monocular accommodation using higher-order Purkinje reflections from the eye, extending PCCR-based methods. We demonstrate that these reflections are sensitive to both gaze rotation and lens accommodation and model the Purkinje images’ behavior in simulation. We also design and fabricate a user-customized eye tracker using cheap off-the-shelf cameras and LEDs. We use an end-to-end convolutional neural network (CNN) for calibrating the eye tracker for the individual user, allowing for robust and simultaneous estimation of vergence and accommodation. Experimental results show that our solution, specifically catering to individual users, outperforms state-of-the-art methods for vergence and depth estimation, achieving an accuracy of 0.3782° and 1.108 cm, respectively. (A brief sketch of such a per-user calibration network appears after this list.)
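Item 2 above describes a webcam gaze tracker that self-calibrates from user interactions. The Python sketch below illustrates that idea under assumptions: clicks are taken as moments when gaze and cursor coincide, and a simple ridge regressor maps eye-patch features to screen coordinates. SearchGazer itself is a JavaScript library; the names here (SelfCalibratingGaze, observe_interaction, predict_gaze) are hypothetical and not its API.

```python
# Illustrative approximation of webcam-gaze self-calibration (item 2): user
# clicks supply (eye-feature, screen-location) pairs, and a simple regressor
# learns the feature-to-gaze mapping online.
import numpy as np
from sklearn.linear_model import Ridge

class SelfCalibratingGaze:
    def __init__(self):
        self.samples, self.targets = [], []
        self.model_x = Ridge(alpha=1.0)
        self.model_y = Ridge(alpha=1.0)

    def observe_interaction(self, eye_features, click_xy):
        """Record a calibration pair: assume the eyes were on the clicked point."""
        self.samples.append(np.asarray(eye_features, float))
        self.targets.append(click_xy)
        X = np.vstack(self.samples)
        y = np.asarray(self.targets, float)
        self.model_x.fit(X, y[:, 0])   # refit the screen-x mapping
        self.model_y.fit(X, y[:, 1])   # refit the screen-y mapping

    def predict_gaze(self, eye_features):
        """Estimate the on-screen gaze point from the current eye-patch features."""
        x = np.asarray(eye_features, float).reshape(1, -1)
        return float(self.model_x.predict(x)[0]), float(self.model_y.predict(x)[0])
```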
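Item 4 above classifies bimanual movements with principal component analysis followed by a bagged-trees ensemble. The sketch below shows one way to assemble such a pipeline with scikit-learn; the feature layout, data shapes, number of classes, and placeholder data are assumptions, not the study's dataset or exact configuration.

```python
# Sketch of the pipeline named in item 4: PCA for salient movement features,
# then a bagged-trees ensemble classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(180, 300))    # e.g., flattened controller time series, one row per repetition
y = rng.integers(0, 6, size=180)   # movement labels (elbow flexion, shoulder abduction, ...)

clf = make_pipeline(
    PCA(n_components=10),                                                          # salient features
    BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0),  # bagged trees
)
print(cross_val_score(clf, X, y, cv=5).mean())   # accuracy is near chance on placeholder data
```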
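Item 5 above uses an end-to-end convolutional neural network to calibrate the eye tracker per user and regress vergence and accommodation. The minimal PyTorch sketch below shows the general shape such a network could take; the architecture, input size, and output units are assumptions for illustration, not the authors' model.

```python
# Minimal sketch of a per-user calibration network in the spirit of item 5:
# a small CNN regresses vergence (degrees) and accommodation (cm) from
# left/right eye crops containing Purkinje reflections.
import torch
import torch.nn as nn

class PurkinjeGazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 5, stride=2, padding=2), nn.ReLU(),   # 2 channels: left/right eye crop
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)   # outputs: [vergence_deg, accommodation_cm]

    def forward(self, eye_crops):      # eye_crops: (batch, 2, 64, 64) grayscale crops
        z = self.features(eye_crops).flatten(1)
        return self.head(z)

model = PurkinjeGazeNet()
prediction = model(torch.zeros(1, 2, 64, 64))   # -> tensor of shape (1, 2)
```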