Title: Who do you look like? - Gaze-based authentication for workers in VR
Behavior-based authentication methods are actively being developed for XR. In particular, gaze-based methods promise continuous authentication of remote users. However, gaze behavior depends on the task being performed, and identification rates are typically highest when training and test data come from the same task. In this study, we compared authentication performance using VR gaze data collected during random-dot viewing, 360-degree image viewing, and a nuclear training simulation. We found that within-task authentication performed best for image viewing (72%). The implication for practitioners is to integrate image viewing into a VR workflow to collect gaze data that is viable for authentication.
Award ID(s):
2026540
PAR ID:
10354972
Journal Name:
2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
Page Range / eLocation ID:
744 to 745
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
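The abstract above contrasts within-task and cross-task authentication. Below is a minimal sketch of that evaluation protocol, assuming each gaze recording has already been reduced to a fixed-length feature vector; the task names, synthetic data, and random-forest classifier are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: within-task vs. cross-task gaze authentication (illustrative only).
# Task names, synthetic data, and the classifier are assumptions, not the
# authors' pipeline; each recording is a fixed-length feature vector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
tasks = ["dot_viewing", "image_viewing", "simulation"]
n_users, n_feats = 10, 12

# Per-user "gaze style" plus task-specific drift, so that within-task
# transfer is easier than cross-task transfer (the paper's observation).
user_style = rng.normal(size=(n_users, n_feats))
task_drift = {t: rng.normal(scale=1.5, size=(n_users, n_feats)) for t in tasks}

def make_split(task, recordings_per_user=5):
    base = user_style + task_drift[task]
    X = (np.repeat(base, recordings_per_user, axis=0)
         + rng.normal(scale=0.5, size=(n_users * recordings_per_user, n_feats)))
    y = np.repeat(np.arange(n_users), recordings_per_user)
    return X, y

data = {t: {s: make_split(t) for s in ("train", "test")} for t in tasks}

for train_task in tasks:
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(*data[train_task]["train"])
    for test_task in tasks:
        X_te, y_te = data[test_task]["test"]
        acc = accuracy_score(y_te, clf.predict(X_te))
        kind = "within" if train_task == test_task else "cross "
        print(f"{kind}-task  train={train_task:13s} test={test_task:13s} acc={acc:.2f}")
```

Training one classifier per task and scoring it against every task's held-out data yields the within-task diagonal and cross-task off-diagonal cells that the abstract compares.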
More Like this
  1. Emerging Virtual Reality (VR) displays with embedded eye trackers are becoming commodity hardware (e.g., HTC Vive Pro Eye). Eye-tracking data can be utilized for several purposes, including gaze monitoring, privacy protection, and user authentication/identification. Identifying users is an integral part of many applications due to security and privacy concerns. In this paper, we explore methods and eye-tracking features that can be used to identify users. Prior VR researchers have explored machine learning on motion-based data (such as body motion, head tracking, eye tracking, and hand tracking data) to identify users. Such systems usually require an explicit VR task and many features to train the machine learning model for user identification. We propose a system that identifies users using minimal eye-gaze-based features, without designing any identification-specific tasks. We collected gaze data from an educational VR application and tested our system with two machine learning (ML) models, random forest (RF) and k-nearest-neighbors (kNN), and two deep learning (DL) models, convolutional neural networks (CNN) and long short-term memory (LSTM). Our results show that ML and DL models could identify users with over 98% accuracy using only six simple eye-gaze features. We discuss our results, their implications for security and privacy, and the limitations of our work.
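A hedged sketch of the kind of pipeline this abstract describes: identifying users from a handful of gaze features with RF and kNN. The six features below (mean position, dispersion, and velocity statistics) are plausible examples of "simple eye-gaze features"; the paper does not specify them here, and the data is synthetic.

```python
# Sketch: user identification from six simple gaze features (illustrative).
# Feature definitions and data are assumptions, not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def gaze_features(x, y, t):
    """Reduce one gaze recording (x/y angles in degrees, timestamps in s)
    to six summary statistics."""
    dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
    speed = np.hypot(dx, dy) / dt                 # angular velocity, deg/s
    return np.array([x.mean(), y.mean(),          # mean gaze position
                     x.std(), y.std(),            # gaze dispersion
                     speed.mean(), speed.max()])  # velocity statistics

# Synthetic recordings: 20 users x 10 recordings each.
rng = np.random.default_rng(1)
X, y = [], []
for user in range(20):
    bias = rng.normal(scale=3.0, size=2)          # per-user idiosyncrasy
    for _ in range(10):
        t = np.arange(500) / 100.0                # 5 s at 100 Hz
        gx = bias[0] + rng.normal(scale=2.0, size=500).cumsum() * 0.01
        gy = bias[1] + rng.normal(scale=2.0, size=500).cumsum() * 0.01
        X.append(gaze_features(gx, gy, t))
        y.append(user)
X, y = np.array(X), np.array(y)

for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```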
  2. Educational VR may help students by being more engaging or improving retention compared to traditional learning methods. However, a student can get distracted in a VR environment due to stress, mind-wandering, unwanted noise, external alerts, and so on. Student eye gaze can be useful for detecting these distractions. We explore deep-learning-based approaches to detecting distraction from gaze data. We designed an educational VR environment and trained three deep learning models (CNN, LSTM, and CNN-LSTM) to gauge a student's distraction level from gaze data, using both supervised and unsupervised learning methods. Our results show that supervised learning provided better test accuracy than unsupervised learning methods.
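As a rough illustration of one of the models mentioned above, here is a minimal LSTM sequence classifier for gaze data in PyTorch. The input shape, hidden size, and binary focused/distracted labels are assumptions made for the sketch, not the paper's actual configuration.

```python
# Sketch: LSTM distraction classifier over gaze sequences (illustrative).
# Input shape, hidden size, and labels are assumptions, not the paper's setup.
import torch
import torch.nn as nn

class GazeLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # focused vs. distracted

    def forward(self, x):                          # x: (batch, time, features)
        _, (h, _) = self.lstm(x)                   # final hidden state
        return self.head(h[-1])

model = GazeLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch: 8 sequences of 250 gaze samples, 4 features each.
x = torch.randn(8, 250, 4)
y = torch.randint(0, 2, (8,))
for _ in range(3):                                 # a few illustrative steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(loss.item())
```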
  3. Human efficiency in finding a target in an image has attracted the attention of machine learning researchers, but what about when no target is there? Knowing how people search in the absence of a target, and when they stop, is important for human-computer interaction systems attempting to predict human gaze behavior in the wild. Here we report a rigorous evaluation of target-absent search behavior using the COCO-Search18 dataset to train state-of-the-art models. We focus on two specific aims. First, we characterize the presence of a target guidance signal in target-absent search behavior by comparing it to target-present guidance and free viewing. We do this by comparing how well a model trained on one type of fixation behavior (target-present, target-absent, free viewing) can predict behavior in either the same or a different task. To compare target-absent search to free-viewing behavior, we created COCO-FreeView, a dataset of free-viewing fixations for the same images used in COCO-Search18. These comparisons revealed the existence of a target guidance signal in target-absent search, albeit one much less dominant than when a target actually appeared in an image, and showed that the target-absent guidance signal was similar to free viewing in that saliency and center bias were both weighted more heavily than guidance from target features. Our second aim focused on the stopping criterion, a question intrinsic to target-absent search. Here we propose to train a foveated target detector whose target detection representation is sensitive to distance from the fovea. Combining the predicted target detection representation with other information, such as fixation history and subject ID, our model outperforms the baselines in predicting when a person stops moving their attention during target-absent search.
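The core methodology above, training on one type of fixation behavior and testing on another, can be sketched as a small evaluation grid. Everything below is a stand-in: random "predicted saliency maps", synthetic fixations, and an NSS-style score replace the paper's trained models and real data.

```python
# Sketch: train-on-one-condition, test-on-another comparison (illustrative).
# Random maps and synthetic fixations stand in for real models and data.
import numpy as np

rng = np.random.default_rng(2)
conditions = ["target_present", "target_absent", "free_viewing"]

def nss(saliency, fix_x, fix_y):
    """Normalized Scanpath Saliency: mean z-scored map value at fixations."""
    z = (saliency - saliency.mean()) / saliency.std()
    return z[fix_y, fix_x].mean()

# Stand-ins for trained models: one random "predicted saliency map" per
# training condition, plus synthetic fixation locations per test condition.
maps = {c: rng.random((60, 80)) for c in conditions}          # H x W
fixations = {c: (rng.integers(0, 80, 50), rng.integers(0, 60, 50))
             for c in conditions}                              # (x, y)

header = "train\\test"
print(f"{header:>16}", *(f"{c:>16}" for c in conditions))
for train_c in conditions:
    row = [nss(maps[train_c], *fixations[test_c]) for test_c in conditions]
    print(f"{train_c:>16}", *(f"{v:>16.3f}" for v in row))
```

With real models and fixations, high off-diagonal scores would indicate a shared guidance signal across conditions, which is the comparison the abstract describes.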
  4. We study student experiences of social VR for remote instruction, with students attending class from home. The study evaluates student experiences when: (1) viewing remote lectures with VR headsets, (2) viewing with desktop displays, (3) presenting with VR headsets, and (4) reflecting on several weeks of VR-based class attendance. Students rated factors such as presence, social presence, simulator sickness, communication methods, avatar and application features, and tradeoffs with other remote approaches. Headset-based viewing and presenting produced higher presence than desktop viewing but had a less clear impact on overall experience and on most social presence measures. We observed higher attentional allocation scores for headset-based presenting than for both viewing methods. For headset VR, there were strong negative correlations between simulator sickness (primarily reported as general discomfort) and ratings of co-presence, overall experience, and some other factors. This suggests that comfortable users experienced substantial benefits from headset viewing and presenting, but others did not. Based on the type of virtual environment, student ratings, and comments, the reported discomfort appears related to physical ergonomic factors or technical problems. Desktop VR appears to be a good alternative for uncomfortable students, and students report that they prefer a mix of headset and desktop viewing. We additionally provide insight from students and a teacher about possible improvements to VR class technology, and we summarize student opinions comparing viewing and presenting in VR to other remote class technologies.
  5. Virtual Reality (VR) headsets with embedded eye trackers are appearing as consumer devices (e.g., HTC Vive Eye, FOVE). These devices could be used in VR-based education (e.g., a virtual lab or a virtual field trip) in which a live teacher guides a group of students. The eye tracking could enable better insight into students' activities and behavior patterns. For real-time insight, a teacher's VR environment can display student eye gaze. These visualizations would help identify students who are confused or distracted, and the teacher could better guide them to focus on important objects. We present six gaze visualization techniques for a VR-embedded teacher's view, and we present a user study comparing these techniques. The results suggest that a short particle trail representing the eye trajectory is promising. In contrast, 3D heatmaps (an adaptation of traditional 2D heatmaps) are problematic for visualizing gaze over a short time span.
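As a rough 2D stand-in for the particle-trail technique this abstract favors, the sketch below fades and shrinks gaze samples with age using matplotlib; the trail length, sizing, and colors are arbitrary choices for illustration, not the paper's in-VR implementation.

```python
# Sketch: a fading "particle trail" of recent gaze samples (illustrative).
# A 2D matplotlib stand-in for the in-VR technique; trail length, sizing,
# and colors are arbitrary choices, not the paper's implementation.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
n = 40                                    # trail length in samples
x = rng.normal(scale=0.5, size=n).cumsum()   # synthetic gaze trajectory
y = rng.normal(scale=0.5, size=n).cumsum()

age = np.linspace(0.0, 1.0, n)            # 0 = oldest, 1 = newest
colors = np.zeros((n, 4))                 # per-particle RGBA
colors[:, 2] = 1.0                        # blue particles
colors[:, 3] = 0.15 + 0.85 * age          # older samples fade out
plt.scatter(x, y, s=10 + 60 * age, c=colors)   # newer samples are larger
plt.plot(x[-1], y[-1], "o", ms=12, mfc="none", mec="red")  # current gaze point
plt.title("Particle trail of recent gaze samples")
plt.axis("equal")
plt.show()
```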