Title: GazeBase, a large-scale, multi-stimulus, longitudinal eye movement dataset
Abstract: This manuscript presents GazeBase, a large-scale longitudinal dataset containing 12,334 monocular eye-movement recordings captured from 322 college-aged participants. Participants completed a battery of seven tasks in two contiguous sessions during each round of recording: (1) a fixation task, (2) a horizontal saccade task, (3) a random oblique saccade task, (4) a reading task, (5 and 6) two free-viewing tasks of cinematic video, and (7) a gaze-driven gaming task. Nine rounds of recording were conducted over a 37-month period, with participants in each subsequent round recruited exclusively from prior rounds. All data were collected using an EyeLink 1000 eye tracker at a 1,000 Hz sampling rate, with a calibration and validation protocol performed before each task to ensure data quality. Owing to its large number of participants and longitudinal nature, GazeBase is well suited for exploring research hypotheses in eye movement biometrics, along with other applications of machine learning to eye-movement signal analysis. Classification labels produced by the instrument's real-time parser are provided for a subset of GazeBase, along with pupil area.
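As a minimal illustration of the kind of eye-movement signal analysis the abstract mentions, the sketch below applies a simple velocity-threshold saccade detector to a 1,000 Hz monocular gaze trace. The column layout, threshold value, and toy data are illustrative assumptions, not GazeBase's actual schema or the instrument's real-time parser.

```python
import numpy as np

FS = 1000.0  # GazeBase sampling rate, Hz

def saccade_mask(x_deg, y_deg, threshold_deg_s=30.0):
    """Flag samples whose angular speed exceeds a simple velocity threshold.

    x_deg, y_deg: gaze position in degrees of visual angle, sampled at FS.
    """
    vx = np.gradient(x_deg) * FS  # component velocities, deg/s
    vy = np.gradient(y_deg) * FS
    speed = np.hypot(vx, vy)      # angular speed, deg/s
    return speed > threshold_deg_s

# Hypothetical trace: fixation, a rapid 5-degree rightward shift, fixation.
x = np.concatenate([np.zeros(100), np.linspace(0.0, 5.0, 20), np.full(100, 5.0)])
y = np.zeros_like(x)
mask = saccade_mask(x, y)  # True only around the rapid shift
```

In practice, published detectors (e.g., I-VT variants) add smoothing and minimum-duration criteria; this sketch only shows the thresholding core.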
Award ID(s):
1714623
PAR ID:
10276003
Author(s) / Creator(s):
Publisher / Repository:
Nature Publishing Group
Date Published:
Journal Name:
Scientific Data
Volume:
8
Issue:
1
ISSN:
2052-4463
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Background: Upper limb proprioceptive impairments are common after stroke and affect daily function. Recent work has shown that stroke survivors have difficulty using visual information to improve proprioception. It is unclear how eye movements are impacted to guide action of the arm after stroke. Here, we aimed to understand how upper limb proprioceptive impairments impact eye movements in individuals with stroke. Methods: Control (N = 20) and stroke participants (N = 20) performed a proprioceptive matching task with upper limb and eye movements. A KINARM exoskeleton with eye tracking was used to assess limb and eye kinematics. The upper limb was passively moved by the robot and participants matched the location with either an arm or eye movement. Accuracy was measured as the difference between passive robot movement location and active limb matching (Hand-End Point Error) or active eye movement matching (Eye-End Point Error). Results: We found that individuals with stroke had significantly larger Hand (2.1×) and Eye-End Point (1.5×) Errors compared to controls. Further, we found that proprioceptive errors of the hand and eye were highly correlated in stroke participants (r = .67, P = .001), a relationship not observed for controls. Conclusions: Eye movement accuracy declined as a function of proprioceptive impairment of the more-affected limb, which was used as a proprioceptive reference. The inability to use proprioceptive information of the arm to coordinate eye movements suggests that disordered proprioception impacts integration of sensory information across different modalities. These results have important implications for how vision is used to actively guide limb movement during rehabilitation.
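The end-point errors defined in the Methods reduce to a distance between the robot-placed target location and the participant's active match. A minimal sketch, with entirely hypothetical coordinates and units:

```python
import numpy as np

def end_point_error(target_xy, match_xy):
    """Euclidean distance between the passive robot movement location
    and the active matching location (hand or gaze end point)."""
    return float(np.linalg.norm(np.asarray(match_xy) - np.asarray(target_xy)))

# Hypothetical single trial (cm): robot moves the arm to (10, 5);
# the participant matches with the hand at (12, 6) and the eyes at (11, 5).
hand_err = end_point_error((10.0, 5.0), (12.0, 6.0))
eye_err = end_point_error((10.0, 5.0), (11.0, 5.0))
```

The study's 2.1× and 1.5× group differences are ratios of such per-trial errors averaged within groups.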
  2. Reading is a highly complex learned skill in which humans move their eyes three to four times every second in response to visual and cognitive processing. The consensus view is that the details of these rapid eye-movement decisions—which part of a word to target with a saccade—are determined solely by low-level oculomotor heuristics. But maximally efficient saccade targeting would be sensitive to ongoing word identification, sending the eyes farther into a word the farther its identification has already progressed. Here, using a covert text-shifting paradigm, we showed just such a statistical relationship between saccade targeting in reading and trial-to-trial variability in cognitive processing. This result suggests that, rather than relying purely on heuristics, the human brain has learned to optimize eye movements in reading even at the fine-grained level of character-position targeting, reflecting efficiency-based sensitivity to ongoing cognitive processing. 
  3. Team member inclusion is vital in collaborative teams. In this work, we explore two strategies to increase the inclusion of human team members in a human-robot team: 1) giving a person in the group a specialized role (the 'robot liaison') and 2) having the robot verbally support human team members. In a human subjects experiment (N = 26 teams, 78 participants), groups of three participants completed two rounds of a collaborative task. In round one, two participants (ingroup) completed a task with a robot in one room, and one participant (outgroup) completed the same task with a robot in a different room. In round two, all three participants and one robot completed a second task in the same room, where one participant was designated as the robot liaison. During round two, the robot verbally supported each participant 6 times on average. Results show that participants with the robot liaison role had a lower perceived group inclusion than the other group members. Additionally, when outgroup members were the robot liaison, the group was less likely to incorporate their ideas into the group's final decision. In response to the robot's supportive utterances, outgroup members, and not ingroup members, showed an increase in the proportion of time they spent talking to the group. Our results suggest that specialized roles may hinder human team member inclusion, whereas supportive robot utterances show promise in encouraging contributions from individuals who feel excluded. 
  4. Abstract: Although the “eye-mind link” hypothesis posits that eye movements provide a direct window into cognitive processing, linking eye movements to specific cognitions in real-world settings remains challenging. This challenge may arise because gaze metrics such as fixation duration, pupil size, and saccade amplitude are often aggregated across timelines that include heterogeneous events. To address this, we tested whether aggregating gaze parameters across participant-defined events could support the hypothesis that increased focal processing, indicated by greater gaze duration and pupil diameter, and decreased scene exploration, indicated by smaller saccade amplitude, would predict effective task performance. Using head-mounted eye trackers, nursing students engaged in simulation learning and later segmented their simulation footage into meaningful events, categorizing their behaviors, task outcomes, and cognitive states at the event level. Increased fixation duration and pupil diameter predicted higher student-rated teamwork quality, while increased pupil diameter predicted judgments of effective communication. Additionally, increased saccade amplitude positively predicted students’ perceived self-efficacy. These relationships did not vary across event types, and gaze parameters did not differ significantly between the beginning, middle, and end of events. However, there was a significant increase in fixation duration during the first five seconds of an event compared to the last five seconds of the previous event, suggesting an initial encoding phase at an event boundary. In conclusion, event-level gaze parameters serve as valid indicators of focal processing and scene exploration in natural learning environments, generalizing across event types.
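The event-level aggregation described above amounts to grouping gaze samples by participant-defined event and averaging each metric within the group. A minimal sketch, with hypothetical field names and values rather than the study's actual data format:

```python
from collections import defaultdict
from statistics import mean

samples = [
    # (event_id, fixation_ms, pupil_mm, saccade_deg) -- hypothetical values
    (1, 220, 3.1, 4.0),
    (1, 250, 3.3, 3.5),
    (2, 180, 2.9, 6.2),
    (2, 200, 3.0, 5.8),
]

# Group sample-level metrics by event, then average each metric per event.
by_event = defaultdict(list)
for event_id, *metrics in samples:
    by_event[event_id].append(metrics)

event_means = {
    eid: tuple(mean(col) for col in zip(*rows))
    for eid, rows in by_event.items()
}
# event_means[1][0] -> 235  (mean fixation duration for event 1)
```

Event-level means like these would then serve as predictors of the event-level outcome ratings (teamwork quality, communication, self-efficacy).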
  5. A powerful operational paradigm for distributed quantum information processing involves manipulating pre-shared entanglement by local operations and classical communication (LOCC). The LOCC round complexity of a given task describes how many rounds of classical communication are needed to complete the task. Despite some results separating one-round versus two-round protocols, very little is known about higher round complexities. In this paper, we revisit the task of one-shot random-party entanglement distillation as a way to highlight some interesting features of LOCC round complexity. We first show that for random-party distillation in three qubits, the number of communication rounds needed in an optimal protocol depends on the entanglement measure used; for the same fixed state some entanglement measures need only two rounds to maximize whereas others need an unbounded number of rounds. In doing so, we construct a family of LOCC instruments that require an unbounded number of rounds to implement. We then prove explicit tight lower bounds on the LOCC round number as a function of distillation success probability. Our calculations show that the original W-state random distillation protocol by Fortescue and Lo is essentially optimal in terms of round complexity. 