Processing social information from faces is difficult for individuals with autism spectrum disorder (ASD). However, it remains unclear whether individuals with ASD make high-level social trait judgments from faces in the same way as neurotypical individuals. Here, we comprehensively addressed this question using naturalistic face images and representatively sampled traits. Despite similar underlying dimensional structures across traits, online adult participants with self-reported ASD showed different judgments and reduced specificity within each trait compared with neurotypical individuals. Deep neural networks revealed that these group differences were driven by specific types of faces and differential utilization of features within a face. Our results were replicated in well-characterized in-lab participants and partially generalized to more controlled face images (a preregistered study). By investigating social trait judgments in a broader population, including individuals with neurodevelopmental variations, our findings carry important theoretical implications for the fundamental dimensions, variations, and potential behavioral consequences of social cognition.
Shared neural codes for visual and semantic information about familiar faces in a common representational space
Processes evoked by seeing a personally familiar face encompass recognition of visual appearance and activation of social and person knowledge. Whereas visual appearance is the same for all viewers, social and person knowledge may be more idiosyncratic. Using between-subject multivariate decoding of hyperaligned functional magnetic resonance imaging data, we investigated whether representations of personally familiar faces in different parts of the distributed neural system for face perception are shared across individuals who know the same people. We found that the identities of both personally familiar and merely visually familiar faces were decoded accurately across brains in the core system for visual processing, but only the identities of personally familiar faces could be decoded across brains in the extended system for processing nonvisual information associated with faces. Our results show that personal interactions with the same individuals lead to shared neural representations of both the seen and unseen features that distinguish their identities.
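The abstract above rests on between-subject decoding: after hyperalignment puts participants' responses into a common space, a decoder trained on some brains is tested on a held-out brain. A minimal numpy sketch of that logic, using entirely synthetic "response patterns" (the identity prototypes, noise level, and nearest-centroid decoder here are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hyperaligned response patterns: after hyperalignment, feature
# dimensions are assumed to correspond across participants.
n_identities, n_features, noise = 4, 50, 0.5
prototypes = rng.normal(size=(n_identities, n_features))  # shared identity codes

def simulate_subject(prototypes, noise, rng):
    """One response pattern per face identity, plus subject-specific noise."""
    return prototypes + noise * rng.normal(size=prototypes.shape)

train_subjects = [simulate_subject(prototypes, noise, rng) for _ in range(3)]
test_subject = simulate_subject(prototypes, noise, rng)

# Between-subject decoding: correlate each held-out pattern with the mean
# training pattern (centroid) for every identity; predict the best match.
centroids = np.mean(train_subjects, axis=0)

def decode(patterns, centroids):
    p = patterns - patterns.mean(axis=1, keepdims=True)
    c = centroids - centroids.mean(axis=1, keepdims=True)
    corr = (p @ c.T) / (
        np.linalg.norm(p, axis=1)[:, None] * np.linalg.norm(c, axis=1)[None, :]
    )
    return corr.argmax(axis=1)

predicted = decode(test_subject, centroids)
accuracy = np.mean(predicted == np.arange(n_identities))
print(accuracy)
```

On the study's account, this kind of cross-brain decoding succeeds for personally familiar faces even in the extended system, because the distinguishing representations are shared across people who know the same individuals.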
- Award ID(s): 1835200
- PAR ID: 10350177
- Date Published:
- Journal Name: Proceedings of the National Academy of Sciences
- Volume: 118
- Issue: 45
- ISSN: 0027-8424
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Deep convolutional neural networks (DCNNs) trained for face identification can rival and even exceed human-level performance. The ways in which the internal face representations in DCNNs relate to human cognitive representations and brain activity are not well understood. Nearly all previous studies focused on static face image processing with rapid display times and ignored the processing of naturalistic, dynamic information. To address this gap, we developed the largest naturalistic dynamic face stimulus set in human neuroimaging research (700+ naturalistic video clips of unfamiliar faces). We used this naturalistic dataset to compare representational geometries estimated from DCNNs, behavioral responses, and brain responses. We found that DCNN representational geometries were consistent across architectures, cognitive representational geometries were consistent across raters in a behavioral arrangement task, and neural representational geometries in face areas were consistent across brains. Representational geometries in late, fully connected DCNN layers, which are optimized for individuation, were much more weakly correlated with cognitive and neural geometries than were geometries in late-intermediate layers. The late-intermediate face-DCNN layers successfully matched cognitive representational geometries, as measured with a behavioral arrangement task that primarily reflected categorical attributes, and correlated with neural representational geometries in known face-selective topographies. Our study suggests that current DCNNs successfully capture neural and cognitive representations of categorical face attributes but capture individuation and dynamic features less accurately.
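Comparing representational geometries, as in the abstract above, is typically done by building a representational dissimilarity matrix (RDM) for each system and correlating the RDMs. A minimal numpy illustration with synthetic data (the stimulus count, noise level, and the names `dcnn_layer`/`brain_area` are assumptions for the sketch, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation between the
    response patterns evoked by each pair of stimuli."""
    z = responses - responses.mean(axis=1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    return 1.0 - z @ z.T

def geometry_similarity(rdm_a, rdm_b):
    """Second-order similarity: correlate the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

# Synthetic responses to the same 20 stimuli from two "systems"
# (e.g. a DCNN layer and a brain region) that share a common signal.
n_stim, n_feat = 20, 100
signal = rng.normal(size=(n_stim, n_feat))
dcnn_layer = signal + 0.5 * rng.normal(size=(n_stim, n_feat))
brain_area = signal + 0.5 * rng.normal(size=(n_stim, n_feat))

r = geometry_similarity(rdm(dcnn_layer), rdm(brain_area))
print(round(r, 2))
```

The key property of this approach is that the two systems need not share feature dimensions at all; only the pairwise dissimilarity structure over the same stimuli is compared.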
From a glimpse of a face, people form trait impressions that operate as facial stereotypes, which are largely inaccurate yet nevertheless drive social behavior. Behavioral studies have long pointed to dimensions of trustworthiness and dominance that are thought to underlie face impressions due to their evolutionarily adaptive nature. Using human neuroimaging (N = 26, 19 female, 7 male), we identify a two-dimensional representation of faces’ inferred traits in the middle temporal gyrus (MTG), a region involved in domain-general conceptual processing including the activation of social concepts. The similarity of neural-response patterns for any given pair of faces in the bilateral MTG was predicted by their proximity in trustworthiness–dominance space, an effect that could not be explained by mere visual similarity. This MTG trait-space representation occurred automatically, was relatively invariant across participants, and did not depend on the explicit endorsement of face impressions (i.e., beliefs that face impressions are valid and accurate). In contrast, regions involved in high-level social reasoning (the bilateral temporoparietal junction and posterior superior temporal sulcus; TPJ–pSTS) and entity-specific social knowledge (the left anterior temporal lobe; ATL) also exhibited this trait-space representation but only among participants who explicitly endorsed forming these impressions. Together, the findings identify a two-dimensional neural representation of face impressions and suggest that multiple implicit and explicit mechanisms give rise to biases based on facial appearance. While the MTG implicitly represents a multidimensional trait space for faces, the TPJ–pSTS and ATL are involved in the explicit application of this trait space for social evaluation and behavior.
Feature-based attention is known to enhance visual processing globally across the visual field, even at task-irrelevant locations. Here, we asked whether attention to object categories, in particular faces, shows similar location-independent tuning. Using EEG, we measured the face-selective N170 component of the EEG signal to examine neural responses to faces at task-irrelevant locations while participants attended to faces at another task-relevant location. Across two experiments, we found that visual processing of faces was amplified at task-irrelevant locations when participants attended to faces relative to when participants attended to either buildings or scrambled face parts. The fact that we see this enhancement with the N170 suggests that these attentional effects occur at the earliest stage of face processing. Two additional behavioral experiments showed that it is easier to attend to the same object category across the visual field relative to two distinct categories, consistent with object-based attention spreading globally. Together, these results suggest that attention to high-level object categories shows similar spatially global effects on visual processing as attention to simple, individual, low-level features.
An important question in human face perception research is to understand whether the neural representation of faces is dynamically modulated by context. In particular, although there is a plethora of neuroimaging literature that has probed the neural representation of faces, few studies have investigated what low-level structural and textural facial features parametrically drive neural responses to faces and whether the representation of these features is modulated by the task. To answer these questions, we employed two task instructions when participants viewed the same faces. We first identified brain regions that parametrically encoded high-level social traits such as perceived facial trustworthiness and dominance, and we showed that these brain regions were modulated by task instructions. We then employed a data-driven computational face model with parametrically generated faces and identified brain regions that encoded low-level variation in the faces (shape and skin texture) that drove neural responses. We further analyzed the evolution of the neural feature vectors along the visual processing stream and visualized and explained these feature vectors. Together, our results showed a flexible neural representation of faces for both low-level features and high-level social traits in the human brain.