

Title: Online hate network spreads malicious COVID-19 content outside the control of individual social media platforms
Abstract

We show that malicious COVID-19 content, including racism, disinformation, and misinformation, exploits the multiverse of online hate to spread quickly beyond the control of any individual social media platform. We provide a first mapping of the online hate network across six major social media platforms. We demonstrate how malicious content can travel across this network in ways that subvert platform moderation efforts. Machine learning topic analysis shows quantitatively how online hate communities are sharpening COVID-19 as a weapon, with topics evolving rapidly and content becoming increasingly coherent. Based on mathematical modeling, we provide predictions of how changes to content moderation policies can slow the spread of malicious content.

Award ID(s):
2030694
PAR ID:
10249897
Publisher / Repository:
Nature Publishing Group
Journal Name:
Scientific Reports
Volume:
11
Issue:
1
ISSN:
2045-2322
Sponsoring Org:
National Science Foundation
More Like this
  1. Social media provide a fertile ground where conspiracy theories and radical ideas can flourish, reach broad audiences, and sometimes lead to hate or violence beyond the online world itself. QAnon represents a notable example of a political conspiracy that started out on social media but turned mainstream, in part due to public endorsement by influential political figures. Nowadays, QAnon conspiracies often appear in the news, are part of political rhetoric, and are espoused by significant swaths of people in the United States. It is therefore crucial to understand how such a conspiracy took root online, and what led so many social media users to adopt its ideas. In this work, we propose a framework that exploits both social interaction and content signals to uncover evidence of user radicalization or support for QAnon. Leveraging a large dataset of 240M tweets collected in the run-up to the 2020 US Presidential election, we define and validate a multivariate metric of radicalization. We use that metric to separate users into distinct, naturally emerging classes of behaviors associated with radicalization processes, from self-declared QAnon supporters to hyper-active conspiracy promoters. We also analyze the impact of Twitter's moderation policies on the interactions among different classes: we discover aspects of moderation that succeed, yielding a substantial reduction in the endorsement received by hyperactive QAnon accounts. But we also uncover where moderation fails, showing how QAnon content amplifiers are not deterred or affected by the Twitter intervention. Our findings refine our understanding of online radicalization processes, reveal effective and ineffective aspects of moderation, and underscore the need to further investigate the role social media play in the spread of conspiracies.

  2. Prejudice and hate directed toward Asian individuals have increased in prevalence and salience during the COVID-19 pandemic, with notable rises in physical violence. Concurrently, as many governments enacted stay-at-home mandates, the spread of anti-Asian content increased in online spaces, including social media. In the present study, we investigated temporal and geographical patterns in social media content relevant to anti-Asian prejudice during the COVID-19 pandemic. Using the Twitter Data Collection API, we queried over 13 million tweets posted between January 30, 2020, and April 30, 2021, for both negative (e.g., #kungflu) and positive (e.g., #stopAAPIhate) hashtags and keywords related to anti-Asian prejudice. In a series of descriptive analyses, we found differences in the frequency of negative and positive keywords based on geographic location. Using burst detection, we also identified distinct increases in negative and positive content in relation to key political tweets and events. These largely exploratory analyses shed light on the role of social media in the expression and proliferation of prejudice, as well as in positive responses online.
  3. People are increasingly exposed to science and political information from social media. One consequence is that these sites play host to "alternative influencers," who spread misinformation. However, content posted by alternative influencers on different social media platforms is unlikely to be homogeneous. Our study uses computational methods to investigate how two dimensions of social media platforms, which we refer to as audience and channel, influence the emotion and topics in content posted by alternative influencers on different platforms. Using COVID-19 as an example, we find that alternative influencers' content contained more anger and fear words on Facebook and Twitter compared to YouTube. We also found that these actors discussed substantively different topics in their COVID-19 content on YouTube compared to Twitter and Facebook. With these findings, we discuss how the audience and channel of different social media platforms affect alternative influencers' ability to spread misinformation online.
  4. Online harassment and content moderation have been well documented in online communities. However, new contexts and systems bring new forms of harassment and demand new moderation mechanisms. This study focuses on hate raids, a form of real-time group attack in live-streaming communities. Through a qualitative analysis of hate raids discussion in the Twitch subreddit (r/Twitch), we found that (1) hate raids, as a human-bot coordinated group attack, leverage the live-streaming system to attack marginalized streamers and other potential groups with(out) breaking the rules, (2) marginalized streamers suffer compound harms with insufficient support from the platform, and (3) moderation strategies are overwhelmingly technical, but streamers still struggle to balance moderation and participation given their marginalized status and needs. We use affordances as a lens to explain how hate raids happen in live-streaming systems and propose moderation-by-design as a lens for developing new features or systems to mitigate the potential abuse of such designs.

  5. On October 27, 2022, Elon Musk purchased Twitter, becoming its new CEO and firing many top executives in the process. Musk listed fewer restrictions on content moderation and the removal of spam bots among his goals for the platform. Given findings of prior research on moderation and hate speech in online communities, the promise of less strict content moderation raises the concern that hate will rise on Twitter. We examine the levels of hate speech and the prevalence of bots before and after Musk's acquisition of the platform. We find that hate speech rose dramatically after Musk purchased Twitter and that the prevalence of most types of bots increased, while the prevalence of astroturf bots decreased.
