Title: Dynamics of neural representations when searching for exemplars and categories of human and non-human faces
Abstract

Face perception abilities in humans exhibit a marked expertise in distinguishing individual human faces at the expense of individual faces from other species (the other-species effect). In particular, one behavioural effect of such specialization is that human adults search for and find categories of non-human faces faster and more accurately than a specific non-human face, and vice versa for human faces. However, a recent visual search study showed that neural responses (event-related potentials, ERPs) were identical when finding either a non-human or human face. We used time-resolved multivariate pattern analysis of the EEG data from that study to investigate the dynamics of neural representations during a visual search for own-species (human) or other-species (non-human ape) faces, with greater sensitivity than traditional ERP analyses. The location of each target (i.e., right or left) could be decoded from the EEG, with similar accuracy for human and non-human faces. However, the neural patterns associated with searching for an exemplar versus a category target differed for human faces compared to non-human faces: Exemplar representations could be more reliably distinguished from category representations for human than non-human faces. These findings suggest that the other-species effect modulates the nature of representations, but preserves the attentional selection of target items based on these representations.
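As a rough illustration of the time-resolved multivariate pattern analysis described above, the sketch below trains and cross-validates a separate classifier at every time point of epoched EEG data, yielding a decoding-accuracy time course for target location (left vs. right). The array shapes, random data, classifier, and cross-validation settings are placeholders for illustration only, not the authors' actual pipeline.

```python
# Minimal sketch of time-resolved decoding of target side (left vs. right)
# from epoched EEG. Assumes `epochs` has shape (n_trials, n_channels, n_times)
# and `target_side` holds binary labels. All names and parameters here are
# illustrative placeholders, not the authors' pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 150      # placeholder dimensions
epochs = rng.standard_normal((n_trials, n_channels, n_times))
target_side = rng.integers(0, 2, n_trials)        # 0 = left target, 1 = right target

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Train and test a separate classifier at each time point, producing a
# decoding-accuracy time course (chance = 0.5 for two classes).
accuracy = np.array([
    cross_val_score(clf, epochs[:, :, t], target_side, cv=cv).mean()
    for t in range(n_times)
])
print(f"peak decoding accuracy: {accuracy.max():.2f}")
```

The same per-time-point scheme extends naturally to comparing decoding time courses across conditions, for example human versus non-human face targets, or exemplar versus category search.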

 
NSF-PAR ID: 10153701
Author(s) / Creator(s): ; ; ; ;
Publisher / Repository: Nature Publishing Group
Date Published:
Journal Name: Scientific Reports
Volume: 8
Issue: 1
ISSN: 2045-2322
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Humans detect faces efficiently from a young age. Face detection is critical for infants to identify and learn from relevant social stimuli in their environments. Faces with eye contact are an especially salient stimulus, and attention to the eyes in infancy is linked to the emergence of later sociality. Despite the importance of both of these early social skills—attending to faces and attending to the eyes—surprisingly little is known about how they interact. We used eye tracking to explore whether eye contact influences infants' face detection. Longitudinally, we examined 2‐, 4‐, and 6‐month‐olds' (N = 65) visual scanning of complex image arrays with human and animal faces varying in eye contact and head orientation. Across all ages, infants displayed superior detection of faces with eye contact; however, this effect varied as a function of species and head orientation. Infants were more attentive to human than animal faces and were more sensitive to eye and head orientation for human faces compared to animal faces. Unexpectedly, human faces with both averted heads and eyes received the most attention. This pattern may reflect the early emergence of gaze following—the ability to look where another individual looks—which begins to develop around this age. Infants may be especially interested in averted gaze faces, providing early scaffolding for joint attention. This study represents the first investigation to document infants' attention patterns to faces systematically varying in their attentional states. Together, these findings suggest that infants develop early, specialized functional conspecific face detection.

     
  2. Abstract

    The human visual cortex is organized in a hierarchical manner. Although previous evidence supporting this hypothesis has accumulated, specific details regarding the spatiotemporal information flow remain unresolved. Here we present detailed spatiotemporal correlation profiles of neural activity with low‐level and high‐level features derived from an eight‐layer neural network pretrained for object recognition. These correlation profiles indicate an early‐to‐late shift from low‐level features to high‐level features and from low‐level regions to higher‐level regions along the visual hierarchy, consistent with feedforward information flow. Additionally, we computed three sets of features from the low‐ and high‐level features provided by the neural network: object‐category‐relevant low‐level features (the common components between low‐level and high‐level features), low‐level features roughly orthogonal to high‐level features (the residual Layer 1 features), and unique high‐level features that were roughly orthogonal to low‐level features (the residual Layer 7 features). Contrasting the correlation effects of the common components and the residual Layer 1 features, we observed that the early visual cortex (EVC) exhibited a similar amount of correlation with the two feature sets early in time, but in a later time window, the EVC exhibited a higher and longer correlation effect with the common components (i.e., the low‐level object‐category‐relevant features) than with the low‐level residual features—an effect unlikely to arise from purely feedforward information flow. Overall, our results indicate that non‐feedforward processes, for example, top‐down influences from mental representations of categories, may facilitate differentiation between these two types of low‐level features within the EVC.
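One generic way to derive common and residual components of the kind described above is to regress the low-level (Layer 1) features onto the high-level (Layer 7) features across stimuli, taking the fitted values as the shared, object-category-relevant component and the regression residuals as the part roughly orthogonal to the high-level features. The sketch below illustrates that idea with placeholder arrays; it is not necessarily the exact decomposition used in the study.

```python
# Illustrative sketch: split low-level (Layer-1) features into a component
# shared with high-level (Layer-7) features and a residual roughly orthogonal
# to them, via ordinary least-squares regression across stimuli.
# Array names and sizes are placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
n_stimuli = 100
layer1 = rng.standard_normal((n_stimuli, 500))   # low-level features per image
layer7 = rng.standard_normal((n_stimuli, 50))    # high-level features per image

# Predict each low-level feature from the high-level features.
beta, *_ = np.linalg.lstsq(layer7, layer1, rcond=None)
common = layer7 @ beta             # object-category-relevant low-level component
residual_layer1 = layer1 - common  # low-level part roughly orthogonal to Layer 7

# Sanity check: residuals are (numerically) orthogonal to the high-level features.
print(np.abs(layer7.T @ residual_layer1).max())
```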

     
  3. Abstract

    Most adults are better at recognizing recently encountered faces of their own race, relative to faces of other races. In adults, this race effect in face recognition is associated with differential neural representations of own‐ and other‐race faces in the fusiform face area (FFA), a high‐level visual region involved in face recognition. Previous research has linked these differential face representations in adults to viewers’ implicit racial associations. However, despite the fact that the FFA undergoes a gradual development which continues well into adulthood, little is known about the developmental time‐course of the race effect in FFA responses. Also unclear is how this race effect might relate to the development of face recognition or implicit associations with own‐ or other‐races during childhood and adolescence. To examine the developmental trajectory of these race effects, in a cross‐sectional study of European American (EA) children (ages 7–11), adolescents (ages 12–16) and adults (ages 18–35), we evaluated responses to adult African American (AA) and EA face stimuli, using functional magnetic resonance imaging and separate behavioral measures outside the scanner. We found that FFA responses to AA and EA faces differentiated during development from childhood into adulthood; meanwhile, the magnitudes of race effects increased in behavioral measures of face‐recognition and implicit racial associations. These three race effects were positively correlated, even after controlling for age. These findings suggest that social and perceptual experiences shape a protracted development of the race effect in face processing that continues well into adulthood.

     
  4. According to a classical view of face perception (Bruce and Young, 1986; Haxby et al., 2000), face identity and facial expression recognition are performed by separate neural substrates (ventral and lateral temporal face-selective regions, respectively). However, recent studies challenge this view, showing that expression valence can also be decoded from ventral regions (Skerry and Saxe, 2014; Li et al., 2019), and identity from lateral regions (Anzellotti and Caramazza, 2017). These findings could be reconciled with the classical view if regions specialized for one task (either identity or expression) contain a small amount of information for the other task (that enables above-chance decoding). In this case, we would expect representations in lateral regions to be more similar to representations in deep convolutional neural networks (DCNNs) trained to recognize facial expression than to representations in DCNNs trained to recognize face identity (the converse should hold for ventral regions). We tested this hypothesis by analyzing neural responses to faces varying in identity and expression. Representational dissimilarity matrices (RDMs) computed from human intracranial recordings (n = 11 adults; 7 females) were compared with RDMs from DCNNs trained to label either identity or expression. We found that RDMs from DCNNs trained to recognize identity correlated with intracranial recordings more strongly in all regions tested—even in regions classically hypothesized to be specialized for expression. These results deviate from the classical view, suggesting that face-selective ventral and lateral regions contribute to the representation of both identity and expression.

    SIGNIFICANCE STATEMENT: Previous work proposed that separate brain regions are specialized for the recognition of face identity and facial expression. However, identity and expression recognition mechanisms might share common brain regions instead. We tested these alternatives using deep neural networks and intracranial recordings from face-selective brain regions. Deep neural networks trained to recognize identity and networks trained to recognize expression learned representations that correlate with neural recordings. Identity-trained representations correlated with intracranial recordings more strongly in all regions tested, including regions hypothesized to be expression specialized in the classical hypothesis. These findings support the view that identity and expression recognition rely on common brain regions. This discovery may require reevaluation of the roles that the ventral and lateral neural pathways play in processing socially relevant stimuli.
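The RDM comparison described in this study follows the general logic of representational similarity analysis. The sketch below, built on placeholder data, shows the basic steps: compute pairwise dissimilarities between stimulus-evoked patterns to form an RDM, then rank-correlate the neural RDM with RDMs built from identity-trained and expression-trained DCNN features. The distance metric, correlation measure, and array shapes are illustrative assumptions, not the study's exact settings.

```python
# Minimal sketch of representational similarity analysis (RSA): build
# representational dissimilarity matrices (RDMs) from neural responses and
# from DCNN activations, then correlate their condensed (upper-triangle) forms.
# Shapes, names, and the distance/correlation choices are placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stimuli = 40                                            # faces varying in identity/expression
neural = rng.standard_normal((n_stimuli, 80))             # e.g., electrode responses per face
dcnn_identity = rng.standard_normal((n_stimuli, 256))     # identity-trained DCNN features
dcnn_expression = rng.standard_normal((n_stimuli, 256))   # expression-trained DCNN features

def rdm(features):
    """Condensed RDM: pairwise correlation distance between stimulus patterns."""
    return pdist(features, metric="correlation")

neural_rdm = rdm(neural)
for name, feats in [("identity DCNN", dcnn_identity), ("expression DCNN", dcnn_expression)]:
    rho, _ = spearmanr(neural_rdm, rdm(feats))
    print(f"{name}: Spearman rho with neural RDM = {rho:.3f}")
```

In a real analysis the same comparison would be repeated per brain region (and, optionally, per DCNN layer), which is what allows asking whether identity-trained or expression-trained representations fit a given region better.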

     
  5. Feature-based attention is known to enhance visual processing globally across the visual field, even at task-irrelevant locations. Here, we asked whether attention to object categories, in particular faces, shows similar location-independent tuning. Using EEG, we measured the face-selective N170 component of the EEG signal to examine neural responses to faces at task-irrelevant locations while participants attended to faces at another task-relevant location. Across two experiments, we found that visual processing of faces was amplified at task-irrelevant locations when participants attended to faces relative to when participants attended to either buildings or scrambled face parts. The fact that we see this enhancement with the N170 suggests that these attentional effects occur at the earliest stage of face processing. Two additional behavioral experiments showed that it is easier to attend to the same object category across the visual field relative to two distinct categories, consistent with object-based attention spreading globally. Together, these results suggest that attention to high-level object categories shows similar spatially global effects on visual processing as attention to simple, individual, low-level features. 
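As a rough sketch of how a component-amplitude measure like the N170 is commonly quantified, the code below averages epoched EEG over a post-stimulus time window at a few occipitotemporal channels, producing one amplitude per trial that could then be compared across attention conditions. The channel indices, time window, and sampling rate are placeholder assumptions, not the values used in these experiments.

```python
# Rough sketch of quantifying a face-selective N170-like measure: average the
# epoched signal over a post-stimulus window at occipitotemporal channels.
# Channel indices, window, and sampling rate are placeholders.
import numpy as np

rng = np.random.default_rng(3)
sfreq = 500                                       # Hz, placeholder sampling rate
times = np.arange(-0.1, 0.5, 1 / sfreq)           # epoch from -100 ms to +500 ms
n_trials, n_channels = 120, 64
epochs = rng.standard_normal((n_trials, n_channels, times.size)) * 1e-6  # volts

occipitotemporal = [55, 56, 62, 63]               # placeholder channel indices
window = (times >= 0.13) & (times <= 0.20)        # typical N170 window (130-200 ms)

# Mean amplitude in the window, per trial, averaged over the chosen channels;
# these per-trial values could then be compared between attention conditions.
n170_amplitude = epochs[:, occipitotemporal, :][:, :, window].mean(axis=(1, 2))
print(n170_amplitude.shape)  # (n_trials,)
```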