Title: EyeShadows: Peripheral Virtual Copies for Rapid Gaze Selection and Interaction
In eye-tracked augmented and virtual reality (AR/VR), instantaneous and accurate hands-free selection of virtual elements is still a significant challenge. Although methods that involve gaze-coupled head movements or hovering can improve selection times compared to methods like gaze-dwell, they are either not instantaneous or have difficulty ensuring that the user's selection is deliberate. In this paper, we present EyeShadows, an eye-gaze-based selection system that takes advantage of peripheral copies (shadows) of items to allow quick selection and manipulation of an object or its corresponding menus. This method is compatible with a variety of selection tasks and controllable items, avoids the Midas touch problem, does not clutter the virtual environment, and is context sensitive. We have implemented and refined this selection tool for VR and AR, including testing with optical and video see-through (OST/VST) displays. Moreover, we demonstrate that this method can be used for a wide range of AR and VR applications, including manipulation of sliders or analog elements. We test its performance in VR against three other selection techniques: dwell (baseline), an inertial reticle, and head-coupled selection. Results showed that selection with EyeShadows was significantly faster than dwell (baseline), outperforming it in the select and search-and-select tasks by 29.8% and 15.7%, respectively, though error rates varied between tasks.
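The abstract describes peripheral copies ("shadows") that the user glances at to trigger a selection without a dwell confirmation. As a rough illustration only (the paper's actual implementation is not shown here; `Item`, `select_by_shadow`, and the coordinates are hypothetical), a minimal Python sketch of shadow-hit selection might look like this:

```python
import math
from dataclasses import dataclass

@dataclass
class Item:
    """A selectable scene object and the screen position of its peripheral copy."""
    name: str
    shadow_x: float  # peripheral copy ("shadow") position, normalized screen coords
    shadow_y: float

def select_by_shadow(gaze_x, gaze_y, items, radius=0.05):
    """Return the item whose peripheral shadow the gaze currently falls on, if any.

    A selection fires as soon as gaze enters a shadow's radius, which is what
    makes this style of technique faster than dwell-based confirmation.
    """
    best, best_d = None, radius
    for item in items:
        d = math.hypot(gaze_x - item.shadow_x, gaze_y - item.shadow_y)
        if d < best_d:
            best, best_d = item, d
    return best

if __name__ == "__main__":
    shadows = [Item("lamp", 0.9, 0.1), Item("slider", 0.9, 0.3)]
    print(select_by_shadow(0.91, 0.12, shadows))  # -> the "lamp" item
```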
Award ID(s):
2306109
NSF-PAR ID:
10502875
Author(s) / Creator(s):
; ; ; ;
Publisher / Repository:
IEEE
Date Published:
Journal Name:
2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR)
Page Range / eLocation ID:
681 to 689
Format(s):
Medium: X
Location:
Orlando, FL, USA
Sponsoring Org:
National Science Foundation
More Like this
  1. Extended reality (XR) technologies, such as virtual reality (VR) and augmented reality (AR), provide users, their avatars, and embodied agents a shared platform to collaborate in a spatial context. Although traditional face-to-face communication is limited by users' proximity, meaning that another person's non-verbal embodied cues become more difficult to perceive the farther one is from that person, researchers and practitioners have started to look into ways to accentuate or amplify such embodied cues and signals to counteract the effects of distance with XR technologies. In this article, we describe and evaluate the Big Head technique, in which a human's head in VR/AR is scaled up relative to their distance from the observer as a mechanism for enhancing the visibility of non-verbal facial cues, such as facial expressions or eye gaze. To better understand and explore this technique, we present two complementary human-subject experiments in this article. In our first experiment, we conducted a VR study with a head-mounted display to understand the impact of increased or decreased head scales on participants' ability to perceive facial expressions as well as their sense of comfort and feeling of "uncanniness" over distances of up to 10 m. We explored two different scaling methods and compared perceptual thresholds and user preferences. Our second experiment was performed in an outdoor AR environment with an optical see-through head-mounted display. Participants were asked to estimate facial expressions and eye gaze, and identify a virtual human over large distances of 30, 60, and 90 m. In both experiments, our results show significant differences in minimum, maximum, and ideal head scales for different distances and tasks related to perceiving faces, facial expressions, and eye gaze, and we also found that participants were more comfortable with slightly bigger heads at larger distances. We discuss our findings with respect to the technologies used, and we discuss implications and guidelines for practical applications that aim to leverage XR-enhanced facial cues.
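The Big Head technique scales an avatar's head with viewing distance so facial cues stay legible. A minimal sketch of one plausible scaling rule, with placeholder near/far distances and scale limits rather than the thresholds reported in the paper:

```python
def head_scale(distance_m, near_m=1.0, far_m=10.0, near_scale=1.0, far_scale=3.0):
    """Illustrative distance-based head scaling: heads render at natural size up
    close and grow linearly with distance so facial expressions and eye gaze
    remain visible. All parameter values here are placeholders."""
    if distance_m <= near_m:
        return near_scale
    t = min((distance_m - near_m) / (far_m - near_m), 1.0)  # clamp beyond far_m
    return near_scale + t * (far_scale - near_scale)

for d in (1, 5, 10, 30):
    print(d, "m ->", round(head_scale(d), 2), "x head scale")
```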
  2. Abstract
    Objective. Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies typically employ well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm.
    Approach. Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition where head movements were allowed. Electroencephalography (EEG), gaze, and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG- and pupil-based discriminating components. Mixed-effects general linear models (GLMs) were used to determine the correlation between these discriminating components and the timing of the different gaze events. HDCA was also used to combine EEG, pupil, and dwell time signals to classify reorienting events.
    Main results. In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against other gaze events, the distributions of the reorienting signals differed across the two modalities, with EEG reorienting signals leading the pupil reorienting signals. We also found that a hybrid classifier that integrates EEG, pupil, and dwell time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) conditions.
    Significance. We show that the neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but can nevertheless be captured and integrated to classify the target vs. distractor objects to which the subject orients.
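The hybrid classifier described above fuses EEG and pupil discriminating components with dwell time to separate target from distractor fixations. The sketch below is an illustrative stand-in, not HDCA itself: it combines one synthetic score per modality with plain logistic regression and reports cross-validated ROC AUC, mirroring the paper's evaluation metric on made-up data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the per-modality discriminating scores and dwell time.
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                       # 1 = target fixation, 0 = distractor
eeg_score = y + rng.normal(0, 1.0, n)           # fake EEG discriminating component
pupil_score = y + rng.normal(0, 1.5, n)         # fake pupil discriminating component
dwell = 0.3 + 0.2 * y + rng.normal(0, 0.1, n)   # fake dwell time in seconds

# Fuse the three feature streams with a linear classifier and score with ROC AUC.
X = np.column_stack([eeg_score, pupil_score, dwell])
auc = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc").mean()
print(f"hybrid classifier AUC on synthetic data: {auc:.2f}")
```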
  3. Emerging Virtual Reality (VR) displays with embedded eye trackers are becoming commodity hardware (e.g., the HTC Vive Pro Eye). Eye-tracking data can be utilized for several purposes, including gaze monitoring, privacy protection, and user authentication/identification. Identifying users is an integral part of many applications due to security and privacy concerns. In this paper, we explore methods and eye-tracking features that can be used to identify users. Prior VR researchers explored machine learning on motion-based data (such as body motion, head tracking, eye tracking, and hand tracking data) to identify users. Such systems usually require an explicit VR task and many features to train the machine learning model for user identification. We propose a system to identify users utilizing minimal eye-gaze-based features without designing any identification-specific tasks. We collected gaze data from an educational VR application and tested our system with two machine learning (ML) models, random forest (RF) and k-nearest-neighbors (kNN), and two deep learning (DL) models, convolutional neural networks (CNN) and long short-term memory (LSTM). Our results show that ML and DL models could identify users with over 98% accuracy with only six simple eye-gaze features. We discuss our results, their implications for security and privacy, and the limitations of our work.
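A rough sketch of the identification pipeline described above, using scikit-learn's random forest and kNN on synthetic per-window gaze features; the six features and the data generator are hypothetical stand-ins for the paper's actual feature set and data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def make_user_data(n_users=5, windows_per_user=40, seed=1):
    """Generate synthetic per-window gaze feature vectors (6 features per window),
    one cluster per user, standing in for features such as mean/std of gaze x/y,
    mean fixation duration, and saccade rate."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for user in range(n_users):
        signature = rng.normal(0, 1, 6)  # a per-user "gaze signature"
        X.append(signature + rng.normal(0, 0.3, (windows_per_user, 6)))
        y += [user] * windows_per_user
    return np.vstack(X), np.array(y)

X, y = make_user_data()
for name, clf in [("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2%} identification accuracy (synthetic data)")
```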
  4. This study builds on Busjahn et al. [4] on the factors influencing dwell time during source code reading, where source code element type and frequency of gaze visits are studied as factors. Unlike the previous study, this study focuses on analyzing eye movement data from large open-source Java projects. Five experts and thirteen novices participated in the study, where the main task was to summarize methods. The results examine the semantic line-level information that developers view during summarization. We find no correlation between line length and the total time spent looking at a line, even though prior work reported such a correlation between a token's length and the total fixation time on the token. The first fixations inside a method are more likely to be on the method's signature, a variable declaration, or an assignment than later fixations inside the method. In addition, we find that smaller methods tend to have shorter overall fixation duration for the entire method, but significantly longer duration per line. The analysis provides insights into how source code's unique characteristics can help in building more robust methods for analyzing eye movements in source code and, more broadly, in building theories to support program comprehension on realistic tasks.
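As an illustration of the line-level analysis described above, the following sketch aggregates hypothetical fixations into per-line dwell times and tests the line-length correlation the study examined; the data and variable names are invented.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical fixation log: (source line number, fixation duration in ms).
fixations = [(1, 210), (1, 180), (2, 300), (3, 150), (3, 220), (3, 90), (5, 400)]
# Hypothetical character length of each source line.
line_lengths = {1: 42, 2: 17, 3: 63, 4: 8, 5: 55}

# Total dwell time per line = sum of fixation durations landing on that line.
per_line_dwell = {ln: 0 for ln in line_lengths}
for line, dur in fixations:
    per_line_dwell[line] += dur

lengths = [line_lengths[ln] for ln in sorted(line_lengths)]
dwells = [per_line_dwell[ln] for ln in sorted(line_lengths)]
r, p = pearsonr(lengths, dwells)
print(f"line length vs. total dwell time: r={r:.2f}, p={p:.2f}")
```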
  5. J. Y. C. Chen (Ed.)
    Controlling and standardizing experiments is imperative for quantitative research methods. With the increase in the availability and quantity of low-cost eye-tracking devices, gaze data are considered an important user input for quantitative analysis in many social science research areas, especially in combination with virtual reality (VR) and augmented reality (AR) technologies. This poses new challenges in providing a common, default interface for gaze data. This paper proposes GazeXR, a general eye-tracking system that interfaces with two eye-tracking devices and creates a hardware-independent virtual environment. We apply GazeXR to an in-class teaching experience analysis use case, using external eye-tracking hardware to collect gaze data for gaze-track analysis.
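GazeXR is described as a hardware-independent interface between eye trackers and the virtual environment. Below is a minimal sketch of what such an abstraction could look like; the class and method names are illustrative guesses, not GazeXR's actual API.

```python
from abc import ABC, abstractmethod
from typing import Tuple

class GazeProvider(ABC):
    """Hardware-independent gaze source: the virtual environment only ever sees
    this interface, so any eye tracker can be plugged in behind it."""

    @abstractmethod
    def sample(self) -> Tuple[float, float, float]:
        """Return (timestamp_s, gaze_x, gaze_y) in normalized screen coordinates."""

class ViveProEyeProvider(GazeProvider):
    def sample(self):
        # Would wrap the vendor SDK; stubbed with a fixed value for this sketch.
        return (0.0, 0.50, 0.50)

class MobileHeadsetProvider(GazeProvider):
    def sample(self):
        # Second device backend; also stubbed.
        return (0.0, 0.48, 0.52)

def log_gaze(provider: GazeProvider, n=3):
    """The environment-side code depends only on GazeProvider, not on hardware."""
    for _ in range(n):
        print(provider.sample())

log_gaze(ViveProEyeProvider())
log_gaze(MobileHeadsetProvider())
```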