- Award ID(s):
- Publication Date:
- NSF-PAR ID:
- Journal Name:
- ACM Symposium on Eye Tracking Research and Applications
- Page Range or eLocation-ID:
- 1 to 5
- Sponsoring Org:
- National Science Foundation
More Like this
Virtual Reality (VR) headsets with embedded eye trackers are appearing as consumer devices (e.g. HTC Vive Eye, FOVE). These devices could be used in VR-based education (e.g., a virtual lab, a virtual field trip) in which a live teacher guides a group of students. The eye tracking could enable better insights into students’ activities and behavior patterns. For real-time insight, a teacher’s VR environment can display student eye gaze. These visualizations would help identify students who are confused/distracted, and the teacher could better guide them to focus on important objects. We present six gaze visualization techniques for a VR-embedded teacher’s view, and we present a user study to compare these techniques. The results suggest that a short particle trail representing eye trajectory is promising. In contrast, 3D heatmaps (an adaptation of traditional 2D heatmaps) for visualizing gaze over a short time span are problematic.
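The particle-trail idea from the abstract can be sketched in a few lines: each gaze sample becomes a particle whose opacity fades with age, so the rendered trail conveys the eye's recent trajectory. The class name, the fixed linear fade, and the 0.5-second lifetime below are illustrative assumptions, not details from the paper.

```python
from collections import deque

class GazeTrail:
    """Minimal sketch of a short gaze particle trail (illustrative only).

    Each gaze sample becomes a particle; particles older than `max_age`
    seconds are discarded, and surviving particles fade linearly so the
    newest sample is fully opaque and the oldest is nearly transparent.
    """

    def __init__(self, max_age=0.5):
        self.max_age = max_age      # assumed particle lifetime in seconds
        self.particles = deque()    # (timestamp, position) pairs, oldest first

    def add_sample(self, t, pos):
        """Record one gaze sample (e.g. a 3D hit point in the VR scene)."""
        self.particles.append((t, pos))

    def visible(self, now):
        """Return (position, alpha) pairs for particles still alive at `now`."""
        # Expired particles are always at the front (samples arrive in order).
        while self.particles and now - self.particles[0][0] > self.max_age:
            self.particles.popleft()
        # Alpha fades linearly with age: newest -> 1.0, oldest -> 0.0.
        return [(pos, 1.0 - (now - t) / self.max_age)
                for t, pos in self.particles]
```

In a real teacher-view renderer, `visible()` would be called once per frame and each returned pair drawn as a small semi-transparent marker along the student's gaze path.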
Eye tracking has become an essential human-machine interaction modality for providing an immersive experience in numerous virtual and augmented reality (VR/AR) applications that demand high throughput (e.g., 240 FPS), a small form factor, and enhanced visual privacy. However, existing eye tracking systems are still limited by: (1) their large form factor, largely due to the bulky lens-based cameras they adopt; (2) the high communication cost between the camera and the backend processor; and (3) concerns over low visual privacy, all of which prohibit their more extensive application. To this end, we propose, develop, and validate a lensless FlatCam-based eye tracking algorithm and accelerator co-design framework, dubbed EyeCoD, to enable eye tracking systems with a much-reduced form factor and boosted system efficiency without sacrificing tracking accuracy, paving the way for next-generation eye tracking solutions. On the system level, we advocate the use of lensless FlatCams instead of lens-based cameras to meet the small form-factor needs of mobile eye tracking systems, which also leaves room for a dedicated sensing-processor co-design that reduces the required camera-processor communication latency. On the algorithm level, EyeCoD integrates a predict-then-focus pipeline that first predicts the region-of-interest (ROI) via segmentation and then focuses only on the ROI to estimate gaze directions, greatly reducing redundant computations and …
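The predict-then-focus pipeline described above can be illustrated with a toy two-stage function: a cheap first stage predicts an ROI bounding box, and the (nominally expensive) gaze estimator then runs only on that crop. The intensity-threshold "segmentation" and the centroid "gaze estimate" below are stand-ins for EyeCoD's learned models, used purely to show the control flow and the compute savings.

```python
import numpy as np

def predict_roi(frame, threshold=0.5):
    """Stage 1 (stand-in): predict the eye region-of-interest.

    A real system would use a segmentation network; here we simply
    threshold intensity and take the bounding box of the bright pixels.
    Returns (y0, y1, x0, x1) in full-frame coordinates.
    """
    ys, xs = np.nonzero(frame > threshold)
    if ys.size == 0:
        return 0, frame.shape[0], 0, frame.shape[1]  # fall back to full frame
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def estimate_gaze(frame):
    """Stage 2: run "gaze estimation" only on the cropped ROI.

    The placeholder estimate is the intensity-weighted centroid of the
    ROI, mapped back to full-frame coordinates. Also returns the fraction
    of pixels actually processed, to show the redundancy that is skipped.
    """
    y0, y1, x0, x1 = predict_roi(frame)
    roi = frame[y0:y1, x0:x1]
    ys, xs = np.nonzero(roi)
    w = roi[ys, xs]
    cy = y0 + (ys * w).sum() / w.sum()
    cx = x0 + (xs * w).sum() / w.sum()
    return cy, cx, roi.size / frame.size
```

With a 100x100 frame whose "eye" occupies a 10x10 patch, the second stage touches only 1% of the pixels, which is the source of the computation savings the abstract describes.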
VR head-mounted displays (HMDs) with embedded eye trackers could enable better teacher-guided VR applications, since eye tracking can provide insights into students' activities and behavior patterns. We present several techniques for visualizing students' eye-gaze data to help a teacher gauge student attention levels. A teacher could then better guide students to focus on the object of interest in the VR environment if their attention drifts and they become distracted or confused.
Virtual reality (VR) is a relatively new and rapidly growing field that is becoming accessible to the larger research community as well as commercially available for entertainment. Relatively cheap, commercially available head-mounted displays (HMDs) are the largest reason for this increase in availability. This work uses Unity and an HMD to create a VR environment that displays a 360° video of a pre-recorded patient handoff between a nurse and a doctor. The VR environment went through several designs during development; this work discusses each stage of its design and the unique challenges we encountered along the way. It also discusses the implementation of the user study and the visualization of the collected eye tracking data.
Eye-tracking is a critical source of information for understanding human behavior and developing future mixed-reality technology. Eye-tracking enables applications that classify user activity or predict user intent. However, eye-tracking datasets collected during common virtual reality tasks have also been shown to enable unique user identification, which creates a privacy risk. In this paper, we focus on the problem of user re-identification from eye-tracking features. We adapt standardized privacy definitions of k-anonymity and plausible deniability to protect datasets of eye-tracking features, and evaluate performance against re-identification by a standard biometric identification model on seven VR datasets. Our results demonstrate that re-identification goes down to chance levels for the privatized datasets, even as utility is preserved to levels higher than 72% accuracy in document type classification.
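One common way to realize k-anonymity for numeric feature vectors is micro-aggregation: records are grouped into clusters of at least k, and each record is replaced by its cluster mean, so every released row is identical to at least k-1 others. The sketch below sorts records along their first feature to form the groups; this is a generic illustration of the k-anonymity definition, not the specific privatization mechanism the paper evaluates.

```python
import numpy as np

def k_anonymize(features, k):
    """Micro-aggregation sketch of k-anonymity (illustrative only).

    Records are ordered by their first feature, grouped into clusters of
    at least k (the last group absorbs any remainder), and each record is
    replaced by its cluster mean. Every output row therefore matches at
    least k-1 other rows, defeating exact re-identification.
    """
    features = np.asarray(features, dtype=float)
    order = np.argsort(features[:, 0])   # crude 1-D ordering of records
    out = np.empty_like(features)
    n, start = len(features), 0
    while start < n:
        # Close the final group early so no group ends up smaller than k.
        end = n if n - start < 2 * k else start + k
        idx = order[start:end]
        out[idx] = features[idx].mean(axis=0)
        start = end
    return out
```

Because each group is replaced by its own mean, the dataset-level mean is preserved exactly, which is one reason aggregate utility (e.g. classification accuracy) can survive this kind of privatization.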