Individuals with autism spectrum disorder (ASD) experience pervasive difficulties in processing social information from faces. However, the behavioral and neural mechanisms underlying social trait judgments of faces in ASD remain largely unclear. Here, we comprehensively addressed this question by employing functional neuroimaging and parametrically generated faces that vary in facial trustworthiness and dominance. Behaviorally, participants with ASD exhibited reduced specificity but increased inter-rater variability in social trait judgments. Neurally, participants with ASD showed hypo-activation across broad face-processing areas. Multivariate analysis based on trial-by-trial face responses could discriminate participant groups in the majority of the face-processing areas. Encoding social traits in ASD engaged vastly different face-processing areas compared to controls, and encoding different social traits engaged different brain areas. Interestingly, the idiosyncratic brain areas encoding social traits in ASD were still flexible and context-dependent, similar to neurotypicals. Additionally, participants with ASD showed an altered encoding of facial saliency features in the eyes and mouth. Together, our results provide a comprehensive understanding of the neural mechanisms underlying social trait judgments in ASD.
Autism spectrum disorder (ASD) is characterized by difficulties in social processes, interactions, and communication. Yet, the neurocognitive bases underlying these difficulties are unclear. Here, we triangulated the ‘trans-diagnostic’ approach to personality, social trait judgments of faces, and neurophysiology to investigate (1) the relative position of autistic traits in a comprehensive social-affective personality space, and (2) the distinct associations between the social-affective personality dimensions and social trait judgment from faces in individuals with ASD and neurotypical individuals. We collected personality and facial judgment data from a large sample of online participants.
- NSF-PAR ID: 10364016
- Publisher / Repository: Nature Publishing Group
- Journal Name: Translational Psychiatry
- Volume: 12
- Issue: 1
- ISSN: 2158-3188
- Sponsoring Org: National Science Foundation
More Like this
Processing social information from faces is difficult for individuals with autism spectrum disorder (ASD). However, it remains unclear whether individuals with ASD make high-level social trait judgments from faces in the same way as neurotypical individuals. Here, we comprehensively addressed this question using naturalistic face images and representatively sampled traits. Despite similar underlying dimensional structures across traits, online adult participants with self-reported ASD showed different judgments and reduced specificity within each trait compared with neurotypical individuals. Deep neural networks revealed that these group differences were driven by specific types of faces and differential utilization of features within a face. Our results were replicated in well-characterized in-lab participants and partially generalized to more controlled face images (a preregistered study). By investigating social trait judgments in a broader population, including individuals with neurodevelopmental variations, we found important theoretical implications for the fundamental dimensions, variations, and potential behavioral consequences of social cognition.
People spontaneously infer other people’s psychology from faces, encompassing inferences of their affective states, cognitive states, and stable traits such as personality. These judgments are known to be often invalid, but nonetheless bias many social decisions. Their importance and ubiquity have made them popular targets for automated prediction using deep convolutional neural networks (DCNNs). Here, we investigated the applicability of this approach: how well does it generalize, and what biases does it introduce? We compared three distinct sets of features (from a face identification DCNN, an object recognition DCNN, and facial geometry), and tested their predictions across multiple out-of-sample datasets. Across judgments and datasets, features from both pre-trained DCNNs provided better predictions than did facial geometry. However, predictions using object recognition DCNN features were not robust to superficial cues (e.g., color and hair style). Importantly, predictions using face identification DCNN features were not specific: models trained to predict one social judgment (e.g., trustworthiness) also significantly predicted other social judgments (e.g., femininity and criminality), in some cases with even higher accuracy than for the judgment of interest. Models trained to predict affective states (e.g., happy) also significantly predicted judgments of stable traits (e.g., sociable), and vice versa. Our analysis pipeline not only provides a flexible and efficient framework for predicting affective and social judgments from faces but also highlights the dangers of such automated predictions: correlated but unintended judgments can drive the predictions of the intended judgments.
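The specificity problem described above can be reproduced in miniature: fit a cross-validated ridge model to predict one judgment from face features, then check whether its out-of-sample predictions also track a second, correlated judgment. Everything in the sketch below (the feature matrix, the rating variables, the ridge penalty) is a simulated stand-in, not the authors' data or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: 200 faces x 512-d "DCNN" features, and two
# correlated mean human ratings per face (names are placeholders).
X = rng.standard_normal((200, 512))
trustworthy = X[:, :8].sum(axis=1) + rng.standard_normal(200)
femininity = 0.8 * trustworthy + rng.standard_normal(200)

def ridge_cv_predict(X, y, alpha=100.0, folds=5):
    """Out-of-fold predictions from a closed-form ridge regression."""
    n, d = X.shape
    preds = np.empty(n)
    for k in range(folds):
        test = np.arange(n) % folds == k
        Xtr, ytr = X[~test], y[~test]
        w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(d), Xtr.T @ ytr)
        preds[test] = X[test] @ w
    return preds

# Train on trustworthiness only, then check what the predictions track.
pred = ridge_cv_predict(X, trustworthy)
r_intended = np.corrcoef(pred, trustworthy)[0, 1]
r_unintended = np.corrcoef(pred, femininity)[0, 1]
print(f"intended r = {r_intended:.2f}, unintended r = {r_unintended:.2f}")
```

Because the two simulated ratings share variance, the model trained for one judgment also predicts the other; this is the confound the abstract warns about.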
From a glimpse of a face, people form trait impressions that operate as “facial stereotypes”, which are largely inaccurate yet nevertheless drive social behavior. Behavioral studies have long pointed to dimensions of trustworthiness and dominance that are thought to underlie face impressions due to their evolutionarily adaptive nature. Using human neuroimaging (N = 26; 19 female, 7 male), we identify a two-dimensional representation of faces’ inferred traits in the middle temporal gyrus (MTG), a region involved in domain-general conceptual processing, including the activation of social concepts. The similarity of neural-response patterns for any given pair of faces in the bilateral MTG was predicted by their proximity in trustworthiness-dominance space, an effect that could not be explained by mere visual similarity. This MTG trait-space representation occurred automatically, was relatively invariant across participants, and did not depend on the explicit endorsement of face impressions (i.e., beliefs that face impressions are valid and accurate). In contrast, regions involved in high-level social reasoning (the bilateral temporoparietal junction and posterior superior temporal sulcus; TPJ-pSTS) and entity-specific social knowledge (the left anterior temporal lobe; ATL) also exhibited this trait-space representation, but only among participants who explicitly endorsed forming these impressions. Together, the findings identify a two-dimensional neural representation of face impressions and suggest that multiple implicit and explicit mechanisms give rise to biases based on facial appearance. While the MTG implicitly represents a multidimensional trait space for faces, the TPJ-pSTS and ATL are involved in the explicit application of this trait space for social evaluation and behavior.

Significance Statement: People form trait impressions based on facial features, which operate like facial stereotypes that bias social decision-making and shape real-world outcomes in career, legal, and political domains. We show that the brain represents others’ faces as points in a two-dimensional space tracking trustworthiness and dominance. One brain region, involved in the activation of conceptual attributes, automatically represented this trait space in response to faces, even if participants did not explicitly endorse the use of facial stereotyping. Other regions, involved in social reasoning about others’ internal qualities, also represented this trait space, but only among participants who endorsed facial stereotyping. The findings reveal how harmful biases based on facial appearance arise in the brain through multiple implicit and explicit mechanisms.
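The pattern-similarity logic behind this kind of result can be illustrated with a minimal representational-similarity sketch: compute pairwise distances between faces in trustworthiness-dominance space and between their response patterns, then rank-correlate the two distance vectors. All data and dimensions below are simulated stand-ins, not the study's fMRI measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-ins: 30 faces with 2-D trait coordinates
# (trustworthiness, dominance) and 100-"voxel" response patterns whose
# geometry partly mirrors the trait space.
traits = rng.standard_normal((30, 2))
patterns = traits @ rng.standard_normal((2, 100)) + 0.5 * rng.standard_normal((30, 100))

def pairwise_dist(M):
    """Condensed pairwise Euclidean distances (lower-triangle vector)."""
    diff = M[:, None, :] - M[None, :, :]
    D = np.sqrt((diff ** 2).sum(-1))
    i, j = np.tril_indices(len(M), k=-1)
    return D[i, j]

def spearman(a, b):
    """Spearman correlation via rank transform (assumes no ties)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# RSA question: does proximity in trait space predict pattern similarity?
rho = spearman(pairwise_dist(traits), pairwise_dist(patterns))
print(f"trait-space vs neural RDM: rho = {rho:.2f}")
```

A positive rho indicates that faces close together in trait space also evoke similar patterns, which is the representational signature reported for the MTG; the published analysis additionally controls for visual similarity, which this sketch omits.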
We examined whether, even at zero acquaintance, observers accurately infer others’ social network positions (specifically, the number and patterning of social ties, e.g., brokerage: the extent to which a person bridges disconnected people) and the trait impressions that support this accuracy. We paired social network data (n = 272 professional school students) with naive observers’ (n = 301 undergraduates) judgments of facial images of each person within the network. Results revealed that observers’ judgments of targets’ number of friends were predicted by the actual number of people who considered the target a friend (in-degree centrality), and that perceived brokerage was significantly predicted by targets’ actual brokerage. Lens models revealed that targets’ perceived attractiveness, dominance, warmth, competence, and trustworthiness supported this accuracy, with attractiveness and warmth most strongly associated with perceptions of popularity and brokerage. Overall, we demonstrate accuracy in naive observers’ judgments of social network position and the trait impressions supporting these inferences.
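The lens-model logic, in which accuracy arises because observers rely on facial cues that are also valid indicators of the criterion, can be sketched with simulated data. All variables, weights, and sample values below are hypothetical, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 272  # targets, mirroring the abstract's sample size

# Hypothetical cues that feed both the criterion and the judgment.
attractiveness = rng.standard_normal(n)
warmth = rng.standard_normal(n)
actual_in_degree = 0.4 * attractiveness + 0.3 * warmth + rng.standard_normal(n)
judged_friends = 0.6 * attractiveness + 0.5 * warmth + rng.standard_normal(n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

cues = {"attractiveness": attractiveness, "warmth": warmth}
# Cue validity (cue -> criterion) and cue utilization (cue -> judgment):
validity = {name: corr(v, actual_in_degree) for name, v in cues.items()}
utilization = {name: corr(v, judged_friends) for name, v in cues.items()}

# Achieved accuracy emerges because utilized cues are also valid cues.
accuracy = corr(judged_friends, actual_in_degree)
print(validity, utilization, round(accuracy, 2))
```

In this toy setup the judgment never sees the criterion directly; the positive accuracy correlation is carried entirely by the shared cues, which is the mechanism the lens-model analysis in the abstract decomposes.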