Individuals with autism spectrum disorder (ASD) experience pervasive difficulties in processing social information from faces. However, the behavioral and neural mechanisms underlying social trait judgments of faces in ASD remain largely unclear. Here, we comprehensively addressed this question by employing functional neuroimaging and parametrically generated faces that varied in facial trustworthiness and dominance. Behaviorally, participants with ASD exhibited reduced specificity but increased inter-rater variability in social trait judgments. Neurally, participants with ASD showed hypo-activation across broad face-processing areas. Multivariate analysis based on trial-by-trial face responses could discriminate participant groups in the majority of the face-processing areas. Encoding social traits in ASD engaged vastly different face-processing areas compared to controls, and encoding different social traits engaged different brain areas. Interestingly, the idiosyncratic brain areas encoding social traits in ASD were still flexible and context-dependent, similar to neurotypicals. Participants with ASD also showed an altered encoding of facial saliency features in the eyes and mouth. Together, our results provide a comprehensive understanding of the neural mechanisms underlying social trait judgments in ASD.
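The multivariate analysis mentioned above is, at its core, a decoding problem: can trial-by-trial face responses in a region separate the two participant groups? Below is a minimal sketch of one way such an analysis might look, with simulated stand-in data, a linear SVM, and stratified cross-validation; none of these choices are taken from the paper itself.

```python
# Illustrative sketch only: classify ASD vs. control participants from
# trial-by-trial responses in a single face-processing ROI. Data are
# simulated stand-ins; shapes and the decoder are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# One feature vector per participant: that participant's responses across
# 100 face trials (assumes a common trial order across participants).
n_asd, n_control, n_trials = 20, 20, 100
X = rng.standard_normal((n_asd + n_control, n_trials))
y = np.array([1] * n_asd + [0] * n_control)  # 1 = ASD, 0 = control

# Linear SVM with 5-fold stratified cross-validation; accuracy reliably
# above 0.5 would indicate the ROI's responses discriminate the groups.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```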
- Award ID(s): 1945230
- NSF-PAR ID: 10212115
- Date Published:
- Journal Name: Cerebral Cortex Communications
- Volume: 1
- Issue: 1
- ISSN: 2632-7376
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Faces are salient social stimuli that attract a stereotypical pattern of eye movement. The human amygdala and hippocampus are involved in various aspects of face processing; however, it remains unclear how they encode the content of fixations when viewing faces. To answer this question, we employed single-neuron recordings with simultaneous eye tracking when participants viewed natural face stimuli. We found a class of neurons in the human amygdala and hippocampus that encoded salient facial features such as the eyes and mouth. With a control experiment using non-face stimuli, we further showed that feature selectivity was specific to faces. We also found another population of neurons that differentiated saccades to the eyes vs. the mouth. Population decoding confirmed our results and further revealed the temporal dynamics of face feature coding. Interestingly, we found that the amygdala and hippocampus played different roles in encoding facial features. Lastly, we revealed two functional roles of feature-selective neurons: 1) they encoded the salient region for face recognition, and 2) they were related to perceived social trait judgments. Together, our results link eye movement with neural face processing and provide important mechanistic insights into human face perception.
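The population decoding and temporal dynamics mentioned in this abstract can be pictured with a sliding-window decoder over binned spike counts. The sketch below uses simulated data; the bin size, the logistic-regression decoder, and the array shapes are illustrative assumptions, not the study's recording pipeline.

```python
# Illustrative sketch only: decode the fixated facial feature (eyes vs.
# mouth) from simulated population spike counts, one time bin at a time.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n_neurons, n_fixations, n_bins = 50, 200, 20  # e.g., twenty 50-ms bins
counts = rng.poisson(2.0, size=(n_fixations, n_neurons, n_bins))
labels = rng.integers(0, 2, n_fixations)      # 0 = eyes, 1 = mouth

# Accuracy as a function of time traces when feature information emerges
# in the population after fixation onset.
accuracy = np.empty(n_bins)
for t in range(n_bins):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, counts[:, :, t], labels, cv=5).mean()
print(np.round(accuracy, 2))
```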
-
Recent neuroimaging evidence challenges the classical view that face identity and facial expression are processed by segregated neural pathways, showing that information about identity and expression are encoded within common brain regions. This article tests the hypothesis that integrated representations of identity and expression arise spontaneously within deep neural networks. A subset of the CelebA dataset is used to train a deep convolutional neural network (DCNN) to label face identity (chance = 0.06%, accuracy = 26.5%), and the FER2013 dataset is used to train a DCNN to label facial expression (chance = 14.2%, accuracy = 63.5%). The identity-trained and expression-trained networks each successfully transfer to labeling both face identity and facial expression on the Karolinska Directed Emotional Faces dataset. This study demonstrates that DCNNs trained to recognize face identity and DCNNs trained to recognize facial expression spontaneously develop representations of facial expression and face identity, respectively. Furthermore, a congruence coefficient analysis reveals that features distinguishing between identities and features distinguishing between expressions become increasingly orthogonal from layer to layer, suggesting that deep neural networks disentangle representational subspaces corresponding to different sources.
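The congruence-coefficient analysis in this abstract compares, layer by layer, the activation-space directions that separate identities with those that separate expressions. The toy sketch below simulates the claimed trend (decreasing congruence, i.e., increasing orthogonality, with depth); the simulated activations, the way the discriminative directions are built, and the layer count are all assumptions, not the study's method.

```python
# Illustrative sketch only: Tucker's congruence coefficient between a
# simulated identity-discriminating direction and a simulated
# expression-discriminating direction at each layer of a network.
import numpy as np

def congruence(x: np.ndarray, y: np.ndarray) -> float:
    """Tucker's congruence coefficient: the cosine between two unscaled
    vectors; near 0 means orthogonal, near 1 means aligned."""
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

rng = np.random.default_rng(2)
n_layers, n_units = 6, 512
shared = rng.standard_normal(n_units)  # component common to both tasks

for layer in range(n_layers):
    w = 1.0 - layer / (n_layers - 1)   # shared component shrinks with depth
    identity_dir = w * shared + rng.standard_normal(n_units)
    expression_dir = w * shared + rng.standard_normal(n_units)
    c = congruence(identity_dir, expression_dir)
    print(f"layer {layer}: congruence = {c:+.3f}")
```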
-
From a glimpse of a face, people form trait impressions that operate as facial stereotypes, which are largely inaccurate yet nevertheless drive social behavior. Behavioral studies have long pointed to dimensions of trustworthiness and dominance that are thought to underlie face impressions due to their evolutionarily adaptive nature. Using human neuroimaging (N = 26, 19 female, 7 male), we identify a two-dimensional representation of faces’ inferred traits in the middle temporal gyrus (MTG), a region involved in domain-general conceptual processing including the activation of social concepts. The similarity of neural-response patterns for any given pair of faces in the bilateral MTG was predicted by their proximity in trustworthiness–dominance space, an effect that could not be explained by mere visual similarity. This MTG trait-space representation occurred automatically, was relatively invariant across participants, and did not depend on the explicit endorsement of face impressions (i.e., beliefs that face impressions are valid and accurate). In contrast, regions involved in high-level social reasoning (the bilateral temporoparietal junction and posterior superior temporal sulcus; TPJ–pSTS) and entity-specific social knowledge (the left anterior temporal lobe; ATL) also exhibited this trait-space representation but only among participants who explicitly endorsed forming these impressions. Together, the findings identify a two-dimensional neural representation of face impressions and suggest that multiple implicit and explicit mechanisms give rise to biases based on facial appearance. While the MTG implicitly represents a multidimensional trait space for faces, the TPJ–pSTS and ATL are involved in the explicit application of this trait space for social evaluation and behavior.
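The MTG finding rests on representational similarity analysis: pairwise neural-pattern similarity should track proximity in trustworthiness–dominance space even after visual similarity is controlled. Here is a minimal sketch of that logic on simulated data; the partial rank-correlation approach, the feature choices, and the array shapes are assumptions, not the authors' exact pipeline.

```python
# Illustrative sketch only: test whether a neural RDM tracks a trait-space
# RDM beyond a visual-similarity RDM. All inputs are simulated stand-ins.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_faces, n_voxels = 40, 200

traits = rng.standard_normal((n_faces, 2))           # (trustworthiness, dominance)
patterns = rng.standard_normal((n_faces, n_voxels))  # e.g., MTG response patterns
pixels = rng.standard_normal((n_faces, 1000))        # crude visual features

neural_rdm = pdist(patterns, metric="correlation")
trait_rdm = pdist(traits, metric="euclidean")
visual_rdm = pdist(pixels, metric="correlation")

def rank_residuals(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Residualize the ranks of a against the ranks of b, for a crude
    partial rank correlation."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    slope, intercept = np.polyfit(rb, ra, 1)
    return ra - (slope * rb + intercept)

# Correlate neural and trait RDMs after removing visual similarity from both.
r, p = spearmanr(rank_residuals(neural_rdm, visual_rdm),
                 rank_residuals(trait_rdm, visual_rdm))
print(f"partial rank correlation: r = {r:.3f}, p = {p:.3f}")
```
-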
Feature-based attention is known to enhance visual processing globally across the visual field, even at task-irrelevant locations. Here, we asked whether attention to object categories, in particular faces, shows similar location-independent tuning. Using EEG, we measured the face-selective N170 component of the EEG signal to examine neural responses to faces at task-irrelevant locations while participants attended to faces at another task-relevant location. Across two experiments, we found that visual processing of faces was amplified at task-irrelevant locations when participants attended to faces relative to when participants attended to either buildings or scrambled face parts. The fact that we see this enhancement with the N170 suggests that these attentional effects occur at the earliest stage of face processing. Two additional behavioral experiments showed that it is easier to attend to the same object category across the visual field relative to two distinct categories, consistent with object-based attention spreading globally. Together, these results suggest that attention to high-level object categories shows similar spatially global effects on visual processing as attention to simple, individual, low-level features.
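A common way to quantify the N170 effects described here is a mean-amplitude measure in a post-stimulus window at occipitotemporal electrodes. The sketch below shows that computation on simulated epochs; the channels (PO7/PO8), the 150-200 ms window, and the array layout are conventional assumptions, not this study's exact parameters.

```python
# Illustrative sketch only: mean amplitude in an N170 window from
# simulated EEG epochs of shape (trials, channels, samples).
import numpy as np

rng = np.random.default_rng(4)
sfreq = 500                               # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / sfreq)   # epoch from -100 to +500 ms
channels = ["PO7", "PO8"]                 # typical occipitotemporal N170 sites
n_trials = 120

epochs = rng.standard_normal((n_trials, len(channels), times.size))  # microvolts

# Trial-average ERP, then mean amplitude in the 150-200 ms N170 window;
# comparing this value across attention conditions indexes the effect above.
erp = epochs.mean(axis=0)
window = (times >= 0.15) & (times <= 0.20)
n170 = erp[:, window].mean(axis=1)
for ch, amp in zip(channels, n170):
    print(f"{ch}: mean N170-window amplitude = {amp:+.2f} uV")
```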