Title: Stereotyping at the intersection of race and weight: Diluted threat stereotyping of obese Black men
Award ID(s):
1748461
PAR ID:
10396092
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Journal of Experimental Social Psychology
Volume:
99
Issue:
C
ISSN:
0022-1031
Page Range / eLocation ID:
104274
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. When seeing a face, people form judgments of perceptually ambiguous social categories (PASCs), for example, gun-owners, gay people, or alcoholics. Previous research has assumed that PASC judgments arise from the statistical learning of facial features in social encounters. We propose, instead, that perceivers associate facial features with traits (e.g., extroverted) and then infer PASC membership via learned stereotype associations with those traits. Across three studies, we show that when any PASC is more stereotypically associated with a trait (e.g., alcoholics = extroverted), perceivers are more likely to infer PASC membership from faces conveying that trait (Study 1). Furthermore, we demonstrate that individual differences in trait–PASC stereotypes predict face-based judgments of PASC membership (Study 2) and have a causal role in these judgments (Study 3). Together, our findings imply that people can form any number of PASC judgments from facial appearance alone by drawing on their learned social–conceptual associations. 
  2. The overarching goal of this research project is to provide a novel contribution to perceived bias research by testing the hypothesis that mere exposure to instances of subtle gender bias in STEM settings can have important effects on observers, depending on whether they recognize such events as gender bias or do not see it as bias. The goal of the first of five experiments was to assess how witnessing subtle gender bias events influences explicit stereotype activation among people who recognize the events as gender stereotyping as well as those who do not. We utilized video materials that were developed and tested in our previous NSF research that show a group of four engineering students, 2 women and 2 men, working together on an engineering design task. There are two versions of the video: one in which the students engaged in subtle gender bias (bias version), and one in which the students engaged in neutral interactions (control version). Over 400 participants were recruited from a large midwestern research university from computer science and engineering majors in which 30% or fewer majors are women. The survey included assessments of perceptions of gender stereotyping in the video, general stereotype endorsement and STEM stereotype endorsement, and three individual difference measures (gender-based rejection sensitivity, sexism sensitivity and negative emotionality) used as covariates in analyses. We found that participants who saw the bias video reported greater explicit stereotyping when they failed to recognize gender bias in the video. When they did recognize bias, they reported explicit stereotyping at levels similar to those in the control condition. This pattern suggests that exposure to subtle gender bias events may have activated gender stereotypes, but when participants recognized the events as gender bias, they tempered their explicit stereotyping.
  3. While harms of allocation have been increasingly studied as part of the subfield of algorithmic fairness, harms of representation have received considerably less attention. In this paper, we formalize two notions of stereotyping and show how they manifest in later allocative harms within the machine learning pipeline. We also propose mitigation strategies and demonstrate their effectiveness on synthetic datasets.
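The last abstract's distinction between representational and allocative harms can be illustrated with a small, self-contained sketch. This is not the paper's formalization or its mitigation strategies; it is a minimal hypothetical example, assuming a synthetic applicant pool in which two groups are equally qualified but one group's observed score carries a stereotype-driven penalty (a representational harm), which then surfaces as a gap in selection rates (an allocative harm) under a single naive threshold.

```python
import random

random.seed(0)

def make_applicants(n, group, bias):
    # True qualification q is identically distributed across groups;
    # the observed score subtracts a stereotype-driven penalty (bias).
    pool = []
    for _ in range(n):
        q = random.random()
        pool.append({"group": group, "qualified": q > 0.5, "score": q - bias})
    return pool

# Group B's scores are depressed by 0.2 despite equal true qualification.
pool = make_applicants(500, "A", 0.0) + make_applicants(500, "B", 0.2)

def selection_rate(pool, group, thresholds):
    members = [p for p in pool if p["group"] == group]
    selected = sum(p["score"] > thresholds[p["group"]] for p in members)
    return selected / len(members)

# A single naive threshold lets the representational harm (biased scores)
# become an allocative harm (fewer selections from group B).
naive = {"A": 0.5, "B": 0.5}
gap_naive = selection_rate(pool, "A", naive) - selection_rate(pool, "B", naive)

# One crude mitigation: per-group thresholds that offset the known shift.
adjusted = {"A": 0.5, "B": 0.3}
gap_adjusted = selection_rate(pool, "A", adjusted) - selection_rate(pool, "B", adjusted)

print(f"selection-rate gap, naive threshold:    {gap_naive:.2f}")
print(f"selection-rate gap, adjusted threshold: {gap_adjusted:.2f}")
```

With these parameters the naive threshold selects roughly 50% of group A but only about 30% of group B, and the group-specific thresholds close most of that gap. The per-group offset is only a toy mitigation under an assumed, known bias; the paper's own strategies should be consulted for principled approaches.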