Humans use language toward hateful ends, inciting violence and genocide, and intimidating and denigrating others based on their identity. Despite efforts to better address the language of hate in the public sphere, the psychological processes involved in hateful language remain unclear. In this work, we hypothesize that morality and hate are concomitant in language. In a series of studies, we find evidence in support of this hypothesis using language from a diverse array of contexts, including the use of hateful language in propaganda to inspire genocide (Study 1), hateful slurs as they occur in large text corpora across a multitude of languages (Study 2), and hate speech on social-media platforms (Study 3). In post hoc analyses focusing on particular moral concerns, we found that the type of moral content invoked through hate speech varied by context, with Purity language prominent in hateful propaganda and online hate speech and Loyalty language invoked in hateful slurs across languages. Our findings provide a new psychological lens for understanding hateful language and point to further research into the intersection of morality and hate, with practical implications for mitigating hateful rhetoric online.
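As a rough illustration of how moral content in text might be quantified, here is a minimal dictionary-count sketch in Python. The categories, word lists, and function names (`MORAL_LEXICON`, `moral_scores`) are hypothetical placeholders, not the instruments used in the studies above.

```python
# Minimal sketch: dictionary-based scoring of moral content in text.
# The word lists below are illustrative placeholders, not the lexicon
# used in the studies described above.
import re
from collections import Counter

MORAL_LEXICON = {
    "purity": {"pure", "filth", "disgust", "contaminate", "sacred"},
    "loyalty": {"traitor", "betray", "loyal", "foreign", "nation"},
}

def moral_scores(text: str) -> dict:
    """Return the share of tokens matching each moral category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for token in tokens:
        for category, words in MORAL_LEXICON.items():
            if token in words:
                counts[category] += 1
    total = max(len(tokens), 1)
    return {category: counts[category] / total for category in MORAL_LEXICON}

print(moral_scores("They are filth who betray the nation."))
```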
Hate speech and hate crimes: a data-driven study of evolving discourse around marginalized groups
- PAR ID: 10497875
- Publisher / Repository: Proc. 2023 IEEE International Conference on Big Data (BigData)
- ISBN: 979-8-3503-2445-7
- Page Range / eLocation ID: 3107 to 3116
- Location: Sorrento, Italy
- Sponsoring Org: National Science Foundation
More Like this
Social stereotypes negatively impact individuals’ judgments about different groups and may have a critical role in understanding language directed toward marginalized groups. Here, we assess the role of social stereotypes in the automated detection of hate speech in the English language by examining the impact of social stereotypes on annotation behaviors, annotated datasets, and hate-speech classifiers. Specifically, we first investigate the impact of novice annotators’ stereotypes on their hate-speech-annotation behavior. Then, we examine the effect of normative stereotypes in language on the aggregated annotators’ judgments in a large annotated corpus. Finally, we demonstrate how normative stereotypes embedded in language resources are associated with systematic prediction errors in a hate-speech classifier. The results demonstrate that hate-speech classifiers reflect social stereotypes against marginalized groups, which can perpetuate social inequalities when propagated at scale. This framework, combining social-psychological and computational-linguistic methods, provides insights into sources of bias in hate-speech moderation, informing ongoing debates regarding machine-learning fairness.
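To illustrate the kind of audit described above, here is a minimal sketch of computing per-group false-positive rates for a hate-speech classifier. The audit table, group labels, and predictions are hypothetical, not the dataset or classifier from the study.

```python
# Minimal sketch: auditing a hate-speech classifier for systematic errors
# across mentioned groups. The data and predictions are hypothetical.
import pandas as pd

audit = pd.DataFrame({
    "text":       ["...", "...", "...", "..."],   # benign posts (placeholders)
    "group":      ["group_a", "group_a", "group_b", "group_b"],
    "true_label": [0, 0, 0, 0],                   # none are hate speech
    "predicted":  [1, 0, 1, 1],                   # classifier output
})

# False-positive rate per mentioned group: on benign posts, how often does
# the classifier flag content as hate speech?
fpr_by_group = (
    audit[audit["true_label"] == 0]
    .groupby("group")["predicted"]
    .mean()
)
print(fpr_by_group)  # large gaps suggest stereotype-linked prediction errors
```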
Recent studies have documented increases in anti-Asian hate throughout the COVID-19 pandemic. Yet relatively little is known about how anti-Asian content on social media, as well as positive messages to combat the hate, has varied over time. In this study, we investigated temporal changes in the frequency of anti-Asian and counter-hate messages on Twitter during the first 16 months of the COVID-19 pandemic. Using the Twitter Data Collection Application Programming Interface, we queried all tweets from January 30, 2020 to April 30, 2021 that contained specific anti-Asian (e.g., #chinavirus, #kungflu) and counter-hate (e.g., #hateisavirus) keywords. From this initial data set, we extracted a random subset of 1,000 Twitter users who had used one or more anti-Asian or counter-hate keywords. For each of these users, we calculated the total number of anti-Asian and counter-hate keywords posted each month. Latent growth curve analysis revealed that the frequency of anti-Asian keywords fluctuated over time in a curvilinear pattern, increasing steadily in the early months and then decreasing in the later months of our data collection. In contrast, the frequency of counter-hate keywords remained low for several months and then increased in a linear manner. Significant between-user variability in both anti-Asian and counter-hate content was observed, highlighting individual differences in the generation of hate and counter-hate messages within our sample. Together, these findings begin to shed light on longitudinal patterns of hate and counter-hate on social media during the COVID-19 pandemic.
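As a sketch of the counting step that feeds such an analysis, the following Python fragment tallies anti-Asian and counter-hate keywords per user per month. The tweet records are hypothetical placeholders, the keyword sets are only the examples named above, and the latent growth curve modeling itself would be carried out separately with specialized software.

```python
# Minimal sketch: per-user monthly counts of anti-Asian and counter-hate
# keywords, i.e., the input to a growth-curve analysis like the one above.
# The tweet records below are hypothetical placeholders.
import pandas as pd

ANTI_ASIAN = {"#chinavirus", "#kungflu"}
COUNTER_HATE = {"#hateisavirus"}

tweets = pd.DataFrame({
    "user_id":    ["u1", "u1", "u2"],
    "created_at": pd.to_datetime(["2020-02-03", "2020-03-10", "2021-04-22"]),
    "text":       ["... #chinavirus", "... #kungflu", "... #hateisavirus"],
})

def count_terms(text: str, terms: set) -> int:
    """Count how many whitespace-separated tokens match the given terms."""
    tokens = text.lower().split()
    return sum(token in terms for token in tokens)

tweets["anti_asian"] = tweets["text"].apply(count_terms, terms=ANTI_ASIAN)
tweets["counter_hate"] = tweets["text"].apply(count_terms, terms=COUNTER_HATE)
tweets["month"] = tweets["created_at"].dt.to_period("M")

# Monthly keyword totals per user; these trajectories would then be modeled
# with a latent growth curve analysis.
monthly = tweets.groupby(["user_id", "month"])[["anti_asian", "counter_hate"]].sum()
print(monthly)
```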