

Title: Deep Learning on Eye Gaze Data to Classify Student Distraction Level in an Educational VR Environment
Educational VR may increase engagement and retention compared to traditional learning, for some topics or students. However, a student could still get distracted and disengaged due to stress, mind-wandering, unwanted noise, external alerts, etc. Student eye gaze can be useful for detecting distraction. For example, we previously considered gaze visualizations to help teachers understand student attention and better identify or guide distracted students. However, it is not practical for a teacher to monitor a large number of student indicators while teaching. To help filter students based on distraction level, we consider a deep learning approach to detect distraction from gaze data. The key aspects are: (1) we created a labeled eye gaze dataset (3.4M data points) from an educational VR environment, (2) we propose an automatic system to gauge a student's distraction level from gaze data, and (3) we apply and compare three deep neural classifiers for this purpose. A proposed CNN-LSTM classifier achieved an accuracy of 89.8% for classifying distraction, per educational activity section, into one of three levels.
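The abstract's CNN-LSTM hybrid can be sketched as follows: 1-D convolutions extract local patterns from a windowed gaze sequence, an LSTM models the temporal dynamics, and a linear head maps the final hidden state to one of three distraction levels. This is a minimal PyTorch sketch; the layer sizes, kernel width, and the choice of four gaze features per time step are illustrative assumptions, not the paper's reported architecture.

```python
import torch
import torch.nn as nn

class CNNLSTMDistraction(nn.Module):
    """Hypothetical CNN-LSTM distraction classifier (sizes are illustrative)."""
    def __init__(self, n_features=4, n_classes=3, conv_channels=32, hidden=64):
        super().__init__()
        # 1-D convolution over time extracts local gaze patterns
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM models the temporal sequence of convolved features
        self.lstm = nn.LSTM(conv_channels, hidden, batch_first=True)
        # Linear head outputs logits over the three distraction levels
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, features)
        x = self.conv(x.transpose(1, 2))       # -> (batch, channels, time/2)
        out, _ = self.lstm(x.transpose(1, 2))  # -> (batch, time/2, hidden)
        return self.head(out[:, -1])           # last hidden state -> logits

model = CNNLSTMDistraction()
gaze = torch.randn(8, 120, 4)  # e.g. 8 activity sections, 120 gaze samples each
logits = model(gaze)           # shape: (8, 3), one logit per distraction level
```

In such a design the convolution acts as a learned feature extractor over short gaze windows, so the LSTM operates on a shorter, denser sequence than the raw samples.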
Award ID(s):
1815976
NSF-PAR ID:
10338209
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
International Conference on Artificial Reality and Telexistence Eurographics Symposium on Virtual Environments (ICAT-EGVE)
ISSN:
1727-530X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Educational VR may help students by being more engaging or improving retention compared to traditional learning methods. However, a student can get distracted in a VR environment due to stress, mind-wandering, unwanted noise, external alerts, etc. Student eye gaze can be useful for detecting these distractions. We explore deep-learning-based approaches to detect distractions from gaze data. We designed an educational VR environment and trained three deep learning models (CNN, LSTM, and CNN-LSTM) to gauge a student's distraction level from gaze data, using both supervised and unsupervised learning methods. Our results show that supervised learning provided better test accuracy than unsupervised learning methods.
  2. VR displays (HMDs) with embedded eye trackers could enable better teacher-guided VR applications, since eye tracking could provide insights into students' activities and behavior patterns. We present several techniques to visualize students' eye-gaze data to help a teacher gauge student attention levels. A teacher could then better guide students to focus on the object of interest in the VR environment if their attention drifts and they get distracted or confused.
  3. Virtual Reality (VR) headsets with embedded eye trackers are appearing as consumer devices (e.g. HTC Vive Eye, FOVE). These devices could be used in VR-based education (e.g., a virtual lab, a virtual field trip) in which a live teacher guides a group of students. The eye tracking could enable better insights into students’ activities and behavior patterns. For real-time insight, a teacher’s VR environment can display student eye gaze. These visualizations would help identify students who are confused/distracted, and the teacher could better guide them to focus on important objects. We present six gaze visualization techniques for a VR-embedded teacher’s view, and we present a user study to compare these techniques. The results suggest that a short particle trail representing eye trajectory is promising. In contrast, 3D heatmaps (an adaptation of traditional 2D heatmaps) for visualizing gaze over a short time span are problematic. 
  4.
    An extensive theoretical and empirical literature stresses the challenges of online learning, especially among students enrolled in open-access institutions who often struggle more due to job and family commitments and a lack of self-regulated learning skills. As online expansion continues in higher education, understanding the specific challenges students encounter in online coursework, and learning strategies that can help them cope with these challenges, can provide valuable insights to be widely shared. Using open-ended survey data collected from 365 students at a state community college system, this study examined students’ perceptions of challenges of online learning that may lead to undesirable learning outcomes and specific strategies they found effective in addressing these challenges. We combined structural topic modeling and human coding in analyzing student responses. Three sets of challenges—including insufficient time management skills, greater tendencies of multitasking and being distracted in an online learning environment, and ineffective interaction and frustrations with help-seeking—emerged from student responses. In response to these challenges, students reflected on ways to improve online learning experiences and outcomes, including improving time management skills, maintaining an organized and distraction-free study environment, proactively seeking help, and using study strategies to improve learning effectiveness. 
  5. Emerging Virtual Reality (VR) displays with embedded eye trackers are currently becoming commodity hardware (e.g., HTC Vive Pro Eye). Eye-tracking data can be utilized for several purposes, including gaze monitoring, privacy protection, and user authentication/identification. Identifying users is an integral part of many applications due to security and privacy concerns. In this paper, we explore methods and eye-tracking features that can be used to identify users. Prior VR researchers explored machine learning on motion-based data (such as body motion, head tracking, eye tracking, and hand tracking data) to identify users. Such systems usually require an explicit VR task and many features to train the machine learning model for user identification. We propose a system to identify users utilizing minimal eye-gaze-based features without designing any identification-specific tasks. We collected gaze data from an educational VR application and tested our system with two machine learning (ML) models, random forest (RF) and k-nearest-neighbors (kNN), and two deep learning (DL) models: convolutional neural networks (CNN) and long short-term memory (LSTM). Our results show that ML and DL models could identify users with over 98% accuracy with only six simple eye-gaze features. We discuss our results, their implications on security and privacy, and the limitations of our work.
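The RF/kNN identification pipeline described above can be sketched with scikit-learn. The six-feature vectors here are synthetic stand-ins (the paper's actual gaze features, e.g. fixation or saccade statistics, are not specified in this abstract), generated so that each user's samples cluster around a user-specific mean.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, samples_per_user, n_feats = 5, 40, 6  # six gaze features, as in the abstract

# Synthetic per-user feature vectors: samples for user u cluster around mean u.
# Real features would be gaze statistics (hypothetical here, not the paper's).
X = np.vstack([rng.normal(loc=u, scale=0.3, size=(samples_per_user, n_feats))
               for u in range(n_users)])
y = np.repeat(np.arange(n_users), samples_per_user)  # user identity labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Train both classifiers on the same features and compare identification accuracy
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
rf_acc, knn_acc = rf.score(X_te, y_te), knn.score(X_te, y_te)
```

Because the synthetic clusters are well separated, both models score near-perfectly here; real gaze features overlap more, which is why feature selection matters in practice.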