Title: Nonlinear relationships between eye gaze and recognition accuracy for ethnic ingroup and outgroup faces.
Researchers have used eye-tracking measures to explore the relationship between face encoding and recognition, including the impact of ethnicity on this relationship. Previous studies offer a variety of conflicting conclusions. This confusion may stem from misestimation of the relationship between encoding and recognition. First, most previous models fail to account for the structure of eye-tracking data, potentially falling prey to Simpson's paradox. Second, previous models assume a linear relationship between attention (e.g., the number of fixations to a to-be-remembered face) and recognition accuracy. Two eye-tracking studies (Ns = 41, 59), one online experiment that manipulates exposure (N = 150), and a mega-analysis examine the effects of ethnicity using what we believe to be more appropriate analytical models. Across studies and measures, we document a novel, critical pattern: The relationship between attention and recognition is nonlinear and negatively accelerating. At low levels of baseline attention, a small increment in attention improves recognition. However, as attention increases further, increments yield smaller and smaller benefits. This finding parallels work in learning and memory. In models that allow for nonlinearity, we find evidence that central features (eyes, nose, and mouth) generally contribute to recognition accuracy, potentially resolving disagreements in the field. We also find that the effects of attention on recognition are similar for ingroup and outgroup faces, a finding with important implications for theories of perceptual expertise.
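The "negatively accelerating" pattern the abstract describes can be illustrated with a minimal sketch. This is not the authors' actual model; the logistic form, the log transform of fixation counts, and the coefficient values are all illustrative assumptions chosen only to show how each additional fixation can yield a smaller and smaller gain in recognition probability.

```python
import math

def recognition_prob(fixations, beta0=-1.0, beta1=1.2):
    """Illustrative saturating model (hypothetical parameters):
    log-transforming the fixation count before the logistic link
    makes each additional fixation yield a smaller accuracy gain."""
    logit = beta0 + beta1 * math.log(fixations + 1)
    return 1.0 / (1.0 + math.exp(-logit))

# The marginal benefit of one extra fixation shrinks as baseline
# attention grows: large at low attention, near zero at high attention.
gains = [recognition_prob(n + 1) - recognition_prob(n) for n in (1, 5, 20)]
assert gains[0] > gains[1] > gains[2]  # negatively accelerating
```

A linear model forced through the same data would average over these regimes, which is one way the conflicting conclusions described above could arise.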
Award ID(s):
2141328 1946788
PAR ID:
10595582
Publisher / Repository:
American Psychological Association
Date Published:
Journal Name:
Journal of Personality and Social Psychology
ISSN:
0022-3514
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Virtual reality (VR) simulations have been adopted to provide controllable environments for running augmented reality (AR) experiments in diverse scenarios. However, insufficient research has explored the impact of AR applications on users, especially their attention patterns, and whether VR simulations accurately replicate these effects. In this work, we propose to analyze user attention patterns via eye tracking during XR usage. To represent applications that provide both helpful guidance and irrelevant information, we built a Sudoku Helper app that includes visual hints and potential distractions during the puzzle-solving period. We conducted two user studies with 19 different users each in AR and VR, in which we collected eye tracking data, conducted gaze-based analysis, and trained machine learning (ML) models to predict user attentional states and attention control ability. Our results show that the AR app had a statistically significant impact on enhancing attention by increasing the fixated proportion of time, while the VR app reduced fixated time and made the users less focused. Results indicate that there is a discrepancy between VR simulations and the AR experience. Our ML models achieve 99.3% and 96.3% accuracy in predicting user attention control ability in AR and VR, respectively. A noticeable performance drop when transferring models trained on one medium to the other further highlights the gap between the AR experience and the VR simulation of it. 
  2. Iris-based biometric authentication is a widespread biometric modality due to its accuracy, among other benefits. Improving the resistance of iris biometrics to spoofing attacks is an important research topic. Eye tracking and iris recognition devices have similar hardware that consists of a source of infrared light and an image sensor. This similarity potentially enables eye tracking algorithms to run on iris-driven biometrics systems. The present work advances the state-of-the-art of detecting iris print attacks, wherein an imposter presents a printout of an authentic user's iris to a biometrics system. The detection of iris print attacks is accomplished via analysis of the captured eye movement signal with a deep learning model. Results indicate better performance of the selected approach than the previous state-of-the-art.
  3. Abstract Teaching a new concept through gestures—hand movements that accompany speech—facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin-Meadow). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often-proposed mechanism—gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: Children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture—they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no-gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning—following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech.
  4. Abstract Misinformation exposure can cause inaccurate beliefs and memories. These unwanted outcomes can be mitigated when misinformation reminders—veracity-labeled statements that repeat earlier-read false information—appear before corrections with true information. The present experiment used eye tracking to examine the role of attention while encoding corrective details in the beneficial effects of reminder-based corrections. Participants read headlines in a belief-updating task that included a within-subjects manipulation of correction format. They first rated the familiarity and veracity of true and false headlines (Phase 1). Then, they read true headlines that corrected false headlines or affirmed true headlines (Phase 2). The true headlines appeared (1) without veracity labels, (2) with veracity labels, or (3) with misinformation reminders and veracity labels. Finally, participants re-rated the veracity of the Phase 1 headlines and rated their memory for whether those headlines were corrected in Phase 2 (Phase 3). Reminder-based corrections led to the greatest reduction in false beliefs, best high confidence recognition of corrections, and earliest eye fixations to the true details of corrections during encoding in Phase 2. Corrections remembered with the highest confidence rating were associated with more and earlier fixations to true details in correction statements in Phase 2. Collectively, these results suggest that misinformation reminders directed attention to corrective details, which improved encoding and subsequent memory for veracity information. These results have applied implications in suggesting that optimal correction formats should include features that direct attention to, and thus support encoding of, the contrast between false and true information. 
  5. Abstract Faces are salient social stimuli that attract a stereotypical pattern of eye movement. The human amygdala and hippocampus are involved in various aspects of face processing; however, it remains unclear how they encode the content of fixations when viewing faces. To answer this question, we employed single-neuron recordings with simultaneous eye tracking when participants viewed natural face stimuli. We found a class of neurons in the human amygdala and hippocampus that encoded salient facial features such as the eyes and mouth. With a control experiment using non-face stimuli, we further showed that feature selectivity was specific to faces. We also found another population of neurons that differentiated saccades to the eyes vs. the mouth. Population decoding confirmed our results and further revealed the temporal dynamics of face feature coding. Interestingly, we found that the amygdala and hippocampus played different roles in encoding facial features. Lastly, we revealed two functional roles of feature-selective neurons: 1) they encoded the salient region for face recognition, and 2) they were related to perceived social trait judgments. Together, our results link eye movement with neural face processing and provide important mechanistic insights for human face perception. 