This paper demonstrates the utility of ambient-focal attention and pupil dilation dynamics to describe visual processing of emotional facial expressions. Pupil dilation and focal eye movements reflect deeper cognitive processing and thus shed more light on the dynamics of emotional expression recognition. Socially anxious individuals (N = 24) and non-anxious controls (N = 24) were asked to recognize emotional facial expressions that gradually morphed from a neutral expression to one of happiness, sadness, or anger in 10-sec animations. Anxious individuals exhibited more ambient face scanning than their non-anxious counterparts. We observed a positive relationship between focal fixations and pupil dilation, indicating deeper processing of viewed faces, but only by non-anxious participants, and only during the last phase of emotion recognition. Group differences in the dynamics of ambient-focal attention support the hypothesis of vigilance in emotional expression processing by socially anxious individuals. We discuss the results by referring to current literature on cognitive psychopathology.
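The abstract above refers to ambient versus focal attention without spelling out how it is quantified. A common operationalization is the coefficient K of Krejtz et al. (2016), which contrasts a standardized fixation duration with the standardized amplitude of the saccade that follows it. The minimal sketch below assumes that operationalization and uses hypothetical data; the abstract itself does not state the exact computation.

```python
import numpy as np

def coefficient_k(fix_durations, next_saccade_amplitudes):
    """Ambient/focal coefficient K (Krejtz et al., 2016).

    For each fixation i, K_i = z(d_i) - z(a_{i+1}), where d_i is the
    fixation duration and a_{i+1} is the amplitude of the saccade that
    follows it; z() standardizes over the whole recording.
    K > 0 suggests focal viewing (long fixations, short saccades);
    K < 0 suggests ambient scanning (short fixations, long saccades).
    """
    d = np.asarray(fix_durations, dtype=float)
    a = np.asarray(next_saccade_amplitudes, dtype=float)
    zd = (d - d.mean()) / d.std(ddof=1)
    za = (a - a.mean()) / a.std(ddof=1)
    return zd - za

# Hypothetical example: durations in ms, following-saccade amplitudes in degrees.
durations = [180, 240, 520, 610, 150]
amplitudes = [6.2, 4.8, 1.1, 0.9, 7.5]
k = coefficient_k(durations, amplitudes)
print(k.mean())  # mean K > 0 -> predominantly focal; < 0 -> predominantly ambient
```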
Eye movements as predictors of student experiences during nursing simulation learning events
Abstract Although the “eye-mind link” hypothesis posits that eye movements provide a direct window into cognitive processing, linking eye movements to specific cognitions in real-world settings remains challenging. This challenge may arise because gaze metrics such as fixation duration, pupil size, and saccade amplitude are often aggregated across timelines that include heterogeneous events. To address this, we tested whether aggregating gaze parameters across participant-defined events could support the hypothesis that increased focal processing, indicated by greater gaze duration and pupil diameter, and decreased scene exploration, indicated by smaller saccade amplitude, would predict effective task performance. Using head-mounted eye trackers, nursing students engaged in simulation learning and later segmented their simulation footage into meaningful events, categorizing their behaviors, task outcomes, and cognitive states at the event level. Increased fixation duration and pupil diameter predicted higher student-rated teamwork quality, while increased pupil diameter predicted judgments of effective communication. Additionally, increased saccade amplitude positively predicted students’ perceived self-efficacy. These relationships did not vary across event types, and gaze parameters did not differ significantly between the beginning, middle, and end of events. However, there was a significant increase in fixation duration during the first five seconds of an event compared to the last five seconds of the previous event, suggesting an initial encoding phase at an event boundary. In conclusion, event-level gaze parameters serve as valid indicators of focal processing and scene exploration in natural learning environments, generalizing across event types.
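As a rough illustration of the event-level approach described above (aggregating gaze parameters within participant-defined events and relating them to event-level ratings), the sketch below aggregates hypothetical fixation-level data per event and fits a mixed-effects model with a random intercept per participant. The synthetic data, column names, and model formula are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical fixation-level gaze data, labeled with the
# participant-defined event each fixation belongs to.
n = 600
gaze = pd.DataFrame({
    "participant": rng.integers(1, 21, n),        # 20 participants
    "event_id": rng.integers(1, 6, n),            # up to 5 events each
    "fixation_duration_ms": rng.gamma(4, 60, n),
    "pupil_diameter_mm": rng.normal(3.5, 0.4, n),
    "saccade_amplitude_deg": rng.gamma(2, 2, n),
})

# 1. Aggregate gaze parameters within each participant-defined event.
events = (gaze.groupby(["participant", "event_id"], as_index=False)
              .agg(fix_dur=("fixation_duration_ms", "mean"),
                   pupil=("pupil_diameter_mm", "mean"),
                   sacc_amp=("saccade_amplitude_deg", "mean")))

# 2. Hypothetical event-level outcome (e.g., self-rated teamwork quality).
events["teamwork"] = (0.5 * (events["pupil"] - events["pupil"].mean())
                      + rng.normal(0, 1, len(events)))

# 3. Mixed-effects model: gaze predictors, random intercept per participant.
fit = smf.mixedlm("teamwork ~ fix_dur + pupil + sacc_amp",
                  data=events, groups=events["participant"]).fit()
print(fit.summary())
```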
- Award ID(s): 2418602
- PAR ID: 10611783
- Publisher / Repository: Springer Science + Business Media
- Date Published:
- Journal Name: Cognitive Research: Principles and Implications
- Volume: 10
- Issue: 1
- ISSN: 2365-7464
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.
-
Abstract Infants vary in their ability to follow others’ gazes, but it is unclear how these individual differences emerge. We tested whether social motivation levels in early infancy predict later gaze following skills. We longitudinally tracked infants’ (N = 82) gazes and pupil dilation while they observed videos of a woman looking into the camera simulating eye contact (i.e., mutual gaze) and then gazing toward one of two objects, at 2, 4, 6, 8, and 14 months of age. To improve measurement validity, we used confirmatory factor analysis to combine multiple observed measures to index the underlying constructs of social motivation and gaze following. Infants’ social motivation—indexed by their speed of social orienting, duration of mutual gaze, and degree of pupil dilation during mutual gaze—was developmentally stable and positively predicted the development of gaze following—indexed by their proportion of time looking to the target object, first object look difference scores, and first face‐to‐object saccade difference scores—from 6 to 14 months of age. These findings suggest that infants’ social motivation likely plays a role in the development of gaze following and highlight the use of a multi‐measure approach to improve measurement sensitivity and validity in infancy research.
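The confirmatory factor analysis described above combines several observed measures into latent constructs and relates those constructs to one another. A minimal sketch of that kind of model is shown below using lavaan-style syntax with the Python package semopy; the file name, variable names, and model specification are hypothetical placeholders, and the abstract does not state which software the authors used.

```python
import pandas as pd
import semopy

# Placeholder data file: one row per infant, columns holding the
# observed indicators named in the abstract (variable names assumed).
data = pd.read_csv("infant_gaze_measures.csv")

# Measurement model: each latent construct is indexed by several observed
# measures; the structural line tests whether social motivation predicts
# gaze following.
desc = """
social_motivation =~ orienting_speed + mutual_gaze_duration + pupil_dilation
gaze_following    =~ prop_target_looking + first_look_diff + first_saccade_diff
gaze_following ~ social_motivation
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # factor loadings and the structural path estimate
```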
-
Abstract This manuscript presents GazeBase, a large-scale longitudinal dataset containing 12,334 monocular eye-movement recordings captured from 322 college-aged participants. Participants completed a battery of seven tasks in two contiguous sessions during each round of recording: (1) a fixation task, (2) a horizontal saccade task, (3) a random oblique saccade task, (4) a reading task, (5/6) free viewing of cinematic video, and (7) a gaze-driven gaming task. Nine rounds of recording were conducted over a 37-month period, with participants in each subsequent round recruited exclusively from prior rounds. All data were collected using an EyeLink 1000 eye tracker at a 1,000 Hz sampling rate, with a calibration and validation protocol performed before each task to ensure data quality. Due to its large number of participants and longitudinal nature, GazeBase is well suited for exploring research hypotheses in eye movement biometrics, along with other applications applying machine learning to eye movement signal analysis. Classification labels produced by the instrument’s real-time parser are provided for a subset of GazeBase, along with pupil area.
-
Abstract Objective. Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies typically employ well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm. Approach. Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition where head movements were allowed. Electroencephalography (EEG), gaze, and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG- and pupil-based discriminating components. Mixed-effects general linear models (GLMs) were used to determine the correlation between these discriminating components and the timing of the different gaze events. HDCA was also used to combine EEG, pupil, and dwell-time signals to classify reorienting events. Main results. In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against other gaze events, the distributions of the reorienting signals were different across the two modalities, with EEG reorienting signals leading the pupil reorienting signals. We also found that the hybrid classifier that integrates EEG, pupil, and dwell-time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) conditions. Significance. We show that the neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but can nevertheless be captured and integrated to classify target vs. distractor objects to which the human subject orients.
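HDCA is named above as the method for deriving discriminating components and for building the hybrid classifier. The sketch below illustrates only the general two-stage logic of an HDCA-style classifier (per-time-window spatial discrimination, then a logistic-regression combination of the window scores) on synthetic data with scikit-learn; it is a simplified stand-in, not the authors' implementation, and all dimensions, features, and parameters are assumed.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic epoched data: trials x channels x time windows, standing in
# for EEG epochs locked to gaze events, with a binary label
# (reorienting toward a target vs. a distractor).
n_trials, n_channels, n_windows = 400, 32, 10
X = rng.normal(size=(n_trials, n_channels, n_windows))
y = rng.integers(0, 2, n_trials)
X[y == 1, :5, 5:] += 0.5  # inject a weak class difference late in the epoch

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Stage 1: within each time window, learn spatial weights across channels
# (here with LDA) and collapse the window to a single discriminant score.
window_models = [LinearDiscriminantAnalysis().fit(X_tr[:, :, k], y_tr)
                 for k in range(n_windows)]

def window_scores(data):
    """One discriminant score per trial and time window."""
    return np.column_stack([m.decision_function(data[:, :, k])
                            for k, m in enumerate(window_models)])

# Stage 2: combine the per-window scores with logistic regression,
# then evaluate with ROC AUC, as in the abstract's reported classifier results.
combiner = LogisticRegression().fit(window_scores(X_tr), y_tr)
probs = combiner.predict_proba(window_scores(X_te))[:, 1]
print("AUC:", round(roc_auc_score(y_te, probs), 3))
```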