Title: Integrating neural and ocular attention reorienting signals in virtual reality
Abstract
Objective. Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies typically employ well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm.
Approach. Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition where head movements were allowed. Electroencephalography (EEG), gaze, and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG- and pupil-based discriminating components. Mixed-effects general linear models (GLMs) were used to determine the correlation between these discriminating components and the timing of the different gaze events. HDCA was also used to combine EEG, pupil, and dwell-time signals to classify reorienting events.
Main results. In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against the other gaze events, the distributions of the reorienting signals differed across the two modalities, with EEG reorienting signals leading the pupil reorienting signals. We also found that a hybrid classifier integrating EEG, pupil, and dwell-time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) conditions.
Significance. We show that the neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but nevertheless can be captured and integrated to classify target vs. distractor objects to which the human subject orients.
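The analysis pipeline described in the abstract lends itself to a compact illustration. Below is a minimal, hypothetical sketch of a two-level, HDCA-style analysis: a linear discriminator is fit within each time window, and the per-window EEG scores, per-window pupil scores, and dwell time are then combined by a second-level classifier scored with AUC. The synthetic data, window lengths, and variable names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical HDCA-style hybrid classifier: EEG + pupil + dwell time.
# Synthetic data and parameters are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_chan, n_samp = 400, 32, 500            # assumed: 32 EEG channels, 1 s epochs at 500 Hz
eeg = rng.standard_normal((n_trials, n_chan, n_samp))
pupil = rng.standard_normal((n_trials, 1, 50))     # assumed pupil trace locked to each fixation
dwell = rng.exponential(0.4, size=(n_trials, 1))   # assumed dwell time per fixation (s)
y = rng.integers(0, 2, n_trials)                   # 1 = target, 0 = distractor

train_idx, test_idx = train_test_split(np.arange(n_trials), test_size=0.3, random_state=0)

def window_scores(X, win):
    """First level: fit one linear discriminator per time window on the
    training trials; return a single interest score per window per trial."""
    n_win = X.shape[2] // win
    scores = np.zeros((X.shape[0], n_win))
    for w in range(n_win):
        # Average over time within the window; channels remain the features.
        Xw = X[:, :, w * win:(w + 1) * win].mean(axis=2)
        clf = LogisticRegression(max_iter=1000).fit(Xw[train_idx], y[train_idx])
        scores[:, w] = clf.decision_function(Xw)
    return scores

# Second level: combine per-window EEG scores, per-window pupil scores, and dwell time.
features = np.hstack([window_scores(eeg, win=50), window_scores(pupil, win=10), dwell])
meta = LogisticRegression(max_iter=1000).fit(features[train_idx], y[train_idx])
auc = roc_auc_score(y[test_idx], meta.decision_function(features[test_idx]))
print("hybrid EEG + pupil + dwell-time AUC:", round(auc, 2))
```

The two-level structure keeps the per-window spatial weights interpretable while letting the second level learn how strongly each window, modality, and the dwell time contribute to the final target-vs-distractor decision.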
Award ID(s):
1816363
PAR ID:
10361339
Author(s) / Creator(s):
Publisher / Repository:
IOP Publishing
Date Published:
Journal Name:
Journal of Neural Engineering
Volume:
18
Issue:
6
ISSN:
1741-2560
Page Range / eLocation ID:
Article No. 066052
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Neural, physiological, and behavioral signals synchronize between human subjects in a variety of settings. Multiple hypotheses have been proposed to explain this interpersonal synchrony, but there is no clarity under which conditions it arises, for which signals, or whether there is a common underlying mechanism. We hypothesized that cognitive processing of a shared stimulus is the source of synchrony between subjects, measured here as intersubject correlation (ISC). To test this, we presented informative videos to participants in an attentive and a distracted condition and subsequently measured information recall. ISC was observed for electroencephalography, gaze position, pupil size, and heart rate, but not for respiration or head movements. The strength of correlation was co-modulated in the different signals, changed with attentional state, and predicted subsequent recall of information presented in the videos. There was robust within-subject coupling between brain, heart, and eyes, but not with respiration or head movements. The results suggest that ISC is the result of effective cognitive processing, and thus emerges only for those signals that exhibit a robust brain–body connection. While physiological and behavioral fluctuations may be driven by multiple features of the stimulus, correlation with other individuals is co-modulated by the level of attentional engagement with the stimulus.
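As a rough illustration of the intersubject correlation (ISC) measure referred to above, the following hypothetical sketch computes a leave-one-out ISC for a single physiological signal recorded while all subjects view the same stimulus. The synthetic data and shapes are assumptions, not the study's actual pipeline.

```python
# Hypothetical leave-one-out intersubject correlation (ISC) for one signal
# (e.g., pupil size). Data are synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_time = 20, 3000
shared = np.sin(np.linspace(0, 40, n_time))                     # stimulus-driven component
signal = shared + 0.8 * rng.standard_normal((n_subj, n_time))   # plus per-subject noise

def isc(data):
    """Mean Pearson r between each subject and the average of all other subjects."""
    rs = []
    for s in range(data.shape[0]):
        others = np.delete(data, s, axis=0).mean(axis=0)
        rs.append(np.corrcoef(data[s], others)[0, 1])
    return float(np.mean(rs))

print("ISC:", round(isc(signal), 3))
```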
  2. In eye-tracked augmented and virtual reality (AR/VR), instantaneous and accurate hands-free selection of virtual elements is still a significant challenge. Though other methods that involve gaze-coupled head movements or hovering can improve selection times in comparison to methods like gaze-dwell, they are either not instantaneous or have difficulty ensuring that the user’s selection is deliberate. In this paper, we present EyeShadows, an eye gaze-based selection system that takes advantage of peripheral copies (shadows) of items that allow for quick selection and manipulation of an object or corresponding menus. This method is compatible with a variety of different selection tasks and controllable items, avoids the Midas touch problem, does not clutter the virtual environment, and is context sensitive. We have implemented and refined this selection tool for VR and AR, including testing with optical and video see-through (OST/VST) displays. Moreover, we demonstrate that this method can be used for a wide range of AR and VR applications, including manipulation of sliders or analog elements. We test its performance in VR against three other selection techniques, including dwell (baseline), an inertial reticle, and head-coupled selection. Results showed that selection with EyeShadows was significantly faster than dwell (baseline), outperforming in the select and search and select tasks by 29.8% and 15.7%, respectively, though error rates varied between tasks. 
  3. Abstract Unconscious neural activity has been shown to precede both motor and cognitive acts. In the present study, we investigated the neural antecedents of overt attention during visual search, where subjects make voluntary saccadic eye movements to search a cluttered stimulus array for a target item. Building on studies of both overt self-generated motor actions (Lau et al., 2004; Soon et al., 2008) and self-generated cognitive actions (Bengson et al., 2014; Soon et al., 2013), we hypothesized that brain activity prior to the onset of a search array would predict the direction of the first saccade during unguided visual search. Because spatial attention and gaze are coordinated during visual search, cognition and motor action are coupled in this task. A well-established finding in fMRI studies of willed action is that neural antecedents of the intention to make a motor act (e.g., reaching) can be identified seconds before the action occurs. Studies of the volitional control of covert spatial attention in EEG have shown that predictive brain activity is limited to only a few hundred milliseconds before a voluntary shift of covert spatial attention. In the present study, the visual search task and stimuli were designed so that subjects could not predict the onset of the search array. Perceptual task difficulty was high, such that subjects could not locate the target using covert attention alone, thus requiring overt shifts of attention (saccades) to carry out the visual search. If the first saccade following array onset in unguided visual search shares mechanisms with willed shifts of covert attention, we expected predictive EEG alpha-band activity (8–12 Hz) immediately prior to the array onset (within 1 s) (Bengson et al., 2014; Nadra et al., 2023). Alternatively, if it follows the principles of willed motor actions, predictive neural signals should be reflected in broadband EEG activity (Libet et al., 1983) and would likely emerge earlier (Soon et al., 2008). Applying support vector machine decoding, we found that the direction of the first saccade in an unguided visual search could be predicted up to two seconds preceding the search array's onset in the broadband but not the alpha-band EEG. These findings suggest that self-directed eye movements in visual search emerge from early preparatory neural activity more akin to willed motor actions than to covert willed attention. This highlights a distinct role for unconscious neural dynamics in shaping visual search behavior.
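To make the decoding analysis concrete, here is a hypothetical sketch of time-resolved linear SVM decoding of first-saccade direction from pre-array EEG with cross-validation. The synthetic data, window sizes, and sampling rate are illustrative assumptions rather than the authors' exact methods.

```python
# Hypothetical time-resolved SVM decoding of first-saccade direction
# (left vs. right) from EEG preceding search-array onset. Synthetic data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_chan, n_samp = 200, 64, 1000     # assumed: 2 s pre-array epochs at 500 Hz
eeg = rng.standard_normal((n_trials, n_chan, n_samp))
saccade_dir = rng.integers(0, 2, n_trials)   # 0 = left, 1 = right first saccade

win = 100  # 200 ms windows
for w in range(n_samp // win):
    X = eeg[:, :, w * win:(w + 1) * win].mean(axis=2)   # broadband mean amplitude per channel
    acc = cross_val_score(SVC(kernel="linear"), X, saccade_dir, cv=5).mean()
    t0 = -2.0 + w * win / 500.0
    print(f"{t0:+.1f} s to {t0 + win / 500.0:+.1f} s: decoding accuracy = {acc:.2f}")
```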
  4. Abstract Real-world work environments require operators to perform multiple tasks with continual support from an automated system. Eye movement is often used as a surrogate measure of operator attention, yet conventional summary measures such as percent dwell time do not capture dynamic transitions of attention in a complex visual workspace. This study analyzed eye movement data collected in a controlled MATB-II task environment using gaze transition entropy analysis. In the study, human subjects performed a compensatory tracking task, a system monitoring task, and a communication task concurrently. The results indicate that both gaze transition entropy and stationary gaze entropy, measures of randomness in eye movements, decreased when the compensatory tracking task required more continuous monitoring. The findings imply that gaze transition entropy consistently reflects the attention allocation of operators performing dynamic operational tasks.
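Gaze transition entropy and stationary gaze entropy follow the standard Shannon-entropy formulation over area-of-interest (AOI) transitions. The sketch below illustrates the computation on a hypothetical AOI fixation sequence; the sequence and AOI labels are assumptions, not the study's data.

```python
# Hypothetical computation of stationary gaze entropy and gaze transition entropy
# from a sequence of fixated areas of interest (AOIs).
import numpy as np

aoi_seq = [0, 1, 0, 2, 2, 1, 0, 1, 2, 0, 0, 1]   # assumed fixation sequence over 3 AOIs
n_aoi = 3

# Transition counts and row-normalized transition probabilities p_ij
counts = np.zeros((n_aoi, n_aoi))
for a, b in zip(aoi_seq[:-1], aoi_seq[1:]):
    counts[a, b] += 1
p_trans = counts / counts.sum(axis=1, keepdims=True)

# Stationary distribution p_i estimated from AOI visit frequencies
p_stat = np.bincount(aoi_seq, minlength=n_aoi) / len(aoi_seq)

def h(p):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

stationary_entropy = h(p_stat)
transition_entropy = np.sum(p_stat * np.array([h(row) for row in p_trans]))
print("stationary gaze entropy:", round(stationary_entropy, 3))
print("gaze transition entropy:", round(transition_entropy, 3))
```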
  5. Abstract Although the “eye-mind link” hypothesis posits that eye movements provide a direct window into cognitive processing, linking eye movements to specific cognitions in real-world settings remains challenging. This challenge may arise because gaze metrics such as fixation duration, pupil size, and saccade amplitude are often aggregated across timelines that include heterogeneous events. To address this, we tested whether aggregating gaze parameters across participant-defined events could support the hypothesis that increased focal processing, indicated by greater gaze duration and pupil diameter, and decreased scene exploration, indicated by smaller saccade amplitude, would predict effective task performance. Using head-mounted eye trackers, nursing students engaged in simulation learning and later segmented their simulation footage into meaningful events, categorizing their behaviors, task outcomes, and cognitive states at the event level. Increased fixation duration and pupil diameter predicted higher student-rated teamwork quality, while increased pupil diameter predicted judgments of effective communication. Additionally, increased saccade amplitude positively predicted students’ perceived self-efficacy. These relationships did not vary across event types, and gaze parameters did not differ significantly between the beginning, middle, and end of events. However, there was a significant increase in fixation duration during the first five seconds of an event compared to the last five seconds of the previous event, suggesting an initial encoding phase at an event boundary. In conclusion, event-level gaze parameters serve as valid indicators of focal processing and scene exploration in natural learning environments, generalizing across event types. 
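As an illustration of the event-level analysis described above, the following hypothetical sketch aggregates gaze parameters within events and relates them to an event-level outcome using a mixed-effects model with a random intercept per participant. The column names, synthetic data, and outcome variable are assumptions, not the study's dataset.

```python
# Hypothetical event-level gaze aggregation and mixed-effects regression.
# Synthetic data; column names and outcome are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for subj in range(15):
    for event in range(10):
        rows.append({
            "subject": subj,
            "fix_dur": rng.normal(300, 50),      # mean fixation duration in the event (ms)
            "pupil": rng.normal(3.5, 0.3),       # mean pupil diameter in the event (mm)
            "saccade_amp": rng.normal(5, 1),     # mean saccade amplitude in the event (deg)
            "teamwork": rng.normal(4, 1),        # event-level teamwork rating
        })
df = pd.DataFrame(rows)

# Fixed effects for event-level gaze metrics, random intercept per participant
model = smf.mixedlm("teamwork ~ fix_dur + pupil + saccade_amp", df, groups=df["subject"])
print(model.fit().summary())
```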