

Search for: All records

Award ID contains: 1846531


  1. Abstract

    Understanding the motivations underlying acts of hatred is essential for developing strategies to prevent such extreme behavioral expressions of prejudice (EBEPs) against marginalized groups. In this work, we investigate the motivations underlying EBEPs as a function of moral values. Specifically, we propose that EBEPs may often be best understood as morally motivated behaviors grounded in people’s moral values and perceptions of moral violations. As evidence, we report five studies that integrate spatial modeling and experimental methods to investigate the relationship between moral values and EBEPs. Our results from these U.S.-based studies suggest that moral values oriented around group preservation predict the county-level prevalence of hate groups and are associated with the belief that extreme behavioral expressions of prejudice against marginalized groups are justified. Additional analyses suggest that the association between group-based moral values and EBEPs against outgroups can be partly explained by the belief that these groups have done something morally wrong.

     
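The county-level analysis can be illustrated with a minimal sketch. Assuming a table of counties with aggregated moral-foundations scores, a hate-group count, and a population column (the file, column names, and model family here are assumptions for illustration, not the authors' actual pipeline), a count regression of the kind the abstract alludes to might look like:

```python
# Hypothetical sketch of a county-level count regression; column names,
# input file, and model family are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("county_moral_values.csv")  # hypothetical input table

# Predict hate-group counts from county-aggregated moral-foundations scores,
# using log(population) as an exposure offset.
X = sm.add_constant(df[["binding_score", "individualizing_score"]])
model = sm.GLM(
    df["hate_group_count"],
    X,
    family=sm.families.Poisson(),
    offset=np.log(df["population"]),
).fit()
print(model.summary())
```

A Poisson family with a population offset is a conventional choice for rare-event counts per geographic unit; the published studies may well use a different spatial specification.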
  2. Abstract

    Social stereotypes negatively impact individuals’ judgments about different groups and may play a critical role in understanding language directed toward marginalized groups. Here, we assess the role of social stereotypes in the automated detection of hate speech in the English language by examining the impact of social stereotypes on annotation behaviors, annotated datasets, and hate speech classifiers. Specifically, we first investigate the impact of novice annotators’ stereotypes on their hate-speech-annotation behavior. Then, we examine the effect of normative stereotypes in language on the aggregated annotators’ judgments in a large annotated corpus. Finally, we demonstrate how normative stereotypes embedded in language resources are associated with systematic prediction errors in a hate-speech classifier. The results demonstrate that hate-speech classifiers reflect social stereotypes against marginalized groups, which can perpetuate social inequalities when propagated at scale. This framework, combining social-psychological and computational-linguistic methods, provides insights into sources of bias in hate-speech moderation, informing ongoing debates regarding machine learning fairness.
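The classifier audit described above can be sketched as a per-group comparison of error rates. Assuming a fitted hate-speech classifier and a test set annotated with ground-truth labels and the social group each text mentions (all file and column names are hypothetical), a minimal bias audit might look like:

```python
# Hypothetical audit: compare false-positive rates of a hate-speech classifier
# across texts that mention different social groups (column names assumed).
import pandas as pd

def false_positive_rate(df: pd.DataFrame) -> float:
    """Share of truly non-hateful texts the classifier flags as hateful."""
    benign = df[df["label"] == 0]
    return float((benign["prediction"] == 1).mean())

test = pd.read_csv("audit_test_set.csv")  # columns: text, label, prediction, group_mention
for group, subset in test.groupby("group_mention"):
    print(f"{group}: FPR = {false_positive_rate(subset):.3f}")
```

Systematically higher false-positive rates for texts mentioning particular groups would be one concrete signature of the stereotype-driven prediction errors the abstract reports.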
  3. Online radicalization is among the most vexing challenges the world faces today. Here, we demonstrate that homogeneity in moral concerns results in increased levels of radical intentions. In Study 1, we find that in Gab, a right-wing extremist network, the degree of moral convergence within a cluster predicts the number of hate-speech messages members post. In Study 2, we replicate this observation in another extremist network, Incels. In Studies 3 to 5 (N = 1,431), we demonstrate that experimentally leading people to believe that others in their hypothetical or real group share their moral views increases their radical intentions as well as willingness to fight and die for the group. Our findings highlight the role of moral convergence in radicalization, emphasizing the need for diversity of moral worldviews within social networks.
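One plausible way to operationalize "moral convergence" within a network cluster (a sketch under assumptions, not necessarily the paper's exact measure) is the mean pairwise similarity of members' moral-concern vectors:

```python
# Sketch: moral convergence as mean pairwise cosine similarity of members'
# moral-concern vectors within one cluster (toy data, assumed representation).
from itertools import combinations
import numpy as np

def moral_convergence(vectors: np.ndarray) -> float:
    """Mean pairwise cosine similarity; values near 1 indicate a morally homogeneous cluster."""
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    pairs = combinations(range(len(normed)), 2)
    return float(np.mean([normed[i] @ normed[j] for i, j in pairs]))

# Each row: one user's scores on five moral foundations (illustrative numbers).
cluster = np.array([
    [0.8, 0.1, 0.7, 0.6, 0.2],
    [0.7, 0.2, 0.8, 0.5, 0.3],
    [0.9, 0.1, 0.6, 0.7, 0.2],
])
print(moral_convergence(cluster))
```

Under this operationalization, the Study 1 finding amounts to a positive association between a cluster's convergence score and its volume of hate-speech messages.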
  4. Abstraction in language has critical implications for memory, judgment, and learning and can provide an important window into a person’s cognitive abstraction level. The linguistic category model (LCM) provides one well-validated, human-coded approach to quantifying linguistic abstraction. In this article, we leverage the LCM to construct the Syntax-LCM, a computer-automated method which quantifies syntax use that indicates abstraction levels. We test the Syntax-LCM’s accuracy for approximating hand-coded LCM scores and validate that it differentiates between text intended for a distal or proximal message recipient (previously linked with shifts in abstraction). We also consider existing automated methods for quantifying linguistic abstraction and find that the Syntax-LCM most consistently approximates LCM scores across contexts. We discuss practical and theoretical implications of these findings.
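The LCM assigns higher abstraction weights to more interpretive word classes (roughly, action verbs are concrete and adjectives are abstract). A heavily simplified sketch of that weighting idea, using off-the-shelf part-of-speech tags rather than the published Syntax-LCM procedure, might look like:

```python
# Simplified illustration of LCM-style abstraction scoring from part-of-speech
# tags; the published Syntax-LCM is a different, validated procedure. The
# weights follow the LCM intuition (verbs concrete, adjectives abstract).
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
WEIGHTS = {"VERB": 1, "ADJ": 4}     # illustrative abstraction weights

def abstraction_score(text: str) -> float:
    """Mean abstraction weight over scoreable tokens; higher means more abstract."""
    tokens = [WEIGHTS[t.pos_] for t in nlp(text) if t.pos_ in WEIGHTS]
    return sum(tokens) / len(tokens) if tokens else 0.0

print(abstraction_score("She pushed him."))     # concrete action verb -> low score
print(abstraction_score("She is aggressive."))  # trait adjective -> high score
```

The full LCM additionally distinguishes descriptive, interpretive, and state verbs, which plain POS tags cannot separate; that gap is exactly what a syntax-based method would need to close.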
  5. Research has shown that accounting for moral sentiment in natural language can yield insight into a variety of on- and off-line phenomena such as message diffusion, protest dynamics, and social distancing. However, measuring moral sentiment in natural language is challenging, and the difficulty of this task is exacerbated by the limited availability of annotated data. To address this issue, we introduce the Moral Foundations Twitter Corpus, a collection of 35,108 tweets that have been curated from seven distinct domains of discourse and hand-annotated by at least three trained annotators for 10 categories of moral sentiment. To facilitate investigations of annotator response dynamics, we also provide psychological and demographic metadata for each annotator. Finally, we report moral sentiment classification baselines for this corpus using a range of popular methodologies.
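A baseline of the kind the abstract mentions could be as simple as bag-of-words features with a one-vs-rest linear classifier. The sketch below assumes a flat file with a text column and one binary column per moral-sentiment category; the file name, column layout, and exact label set are assumptions, so consult the corpus release for the real format:

```python
# Hypothetical baseline: TF-IDF features with one-vs-rest logistic regression
# for multi-label moral-sentiment classification. File name, columns, and
# label list are assumptions for illustration.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

LABELS = ["care", "harm", "fairness", "cheating", "loyalty",
          "betrayal", "authority", "subversion", "purity", "degradation"]

df = pd.read_csv("mftc_flat.csv")  # assumed: 'text' column + one binary column per label
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df[LABELS], test_size=0.2, random_state=0)

vectorizer = TfidfVectorizer(min_df=2)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print("macro F1:", f1_score(y_test, preds, average="macro"))
```

One-vs-rest treats the ten categories as independent binary decisions, which suits this task because a single tweet can express more than one moral sentiment.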