Title: The COVID-19 Infodemic: Twitter versus Facebook
The global spread of the novel coronavirus is affected by the spread of related misinformation—the so-called COVID-19 Infodemic—that makes populations more vulnerable to the disease through resistance to mitigation efforts. Here, we analyze the prevalence and diffusion of links to low-credibility content about the pandemic across two major social media platforms, Twitter and Facebook. We characterize cross-platform similarities and differences in popular sources, diffusion patterns, influencers, coordination, and automation. Comparing the two platforms, we find divergence among the prevalence of popular low-credibility sources and suspicious videos. A minority of accounts and pages exert a strong influence on each platform. These misinformation “superspreaders” are often associated with the low-credibility sources and tend to be verified by the platforms. On both platforms, there is evidence of coordinated sharing of Infodemic content. The overt nature of this manipulation points to the need for societal-level solutions in addition to mitigation strategies within the platforms. However, we highlight limits imposed by inconsistent data-access policies on our capability to study harmful manipulations of information ecosystems.
Award ID(s):
1735095
NSF-PAR ID:
10289336
Author(s) / Creator(s):
Date Published:
Journal Name:
Big Data & Society
Volume:
8
Issue:
1
ISSN:
2053-9517
Page Range / eLocation ID:
205395172110138
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Guidi, Barbara (Ed.)
    The COVID-19 pandemic brought widespread attention to an “infodemic” of potential health misinformation, but this claim had not been assessed against evidence. We evaluated whether health misinformation became more common during the pandemic. We gathered about 325 million posts sharing URLs from Twitter and Facebook during the beginning of the pandemic (March 8-May 1, 2020) and during the same period in 2019. We relied on source credibility as an accepted proxy for misinformation across this database. Human annotators also coded a subsample of 3000 posts with URLs for misinformation. Posts about COVID-19 were 0.37 times as likely to link to “not credible” sources and 1.13 times more likely to link to “more credible” sources than posts from before the pandemic. Posts linking to “not credible” sources were 3.67 times more likely to include misinformation than posts from “more credible” sources. Thus, during the earliest stages of the pandemic, when claims of an infodemic emerged, social media contained proportionally less misinformation than expected based on the prior year. Our results suggest that widespread health misinformation is not unique to COVID-19. Rather, it is a systemic feature of online health communication that can adversely impact public health behaviors and must therefore be addressed.
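    The comparison above rests on simple ratios of proportions (e.g., pandemic-period posts being 0.37 times as likely to link to “not credible” sources). A minimal sketch of that arithmetic, using made-up counts rather than the paper's data:

```python
def risk_ratio(hits_a, total_a, hits_b, total_b):
    """Ratio of proportions: how many times as likely posts in
    period A are to link to a source category versus period B."""
    return (hits_a / total_a) / (hits_b / total_b)

# Hypothetical counts (not the paper's data): if 3.7% of pandemic-period
# posts versus 10% of 2019 posts linked to "not credible" sources,
# the ratio is 0.37 -- i.e., 0.37 times as likely.
ratio = risk_ratio(37, 1000, 100, 1000)
print(round(ratio, 2))  # → 0.37
```

    A ratio below 1 means the behavior became relatively less common in the later period, which is how the study's 0.37 figure should be read.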
  2. With the spread of SARS-CoV-2, enormous amounts of information about the pandemic are disseminated through social media platforms such as Twitter. Social media posts often leverage the trust readers place in prestigious news agencies and cite news articles as a way of gaining credibility. Nevertheless, it is not always the case that the cited article supports the claim made in the social media post. We present a cross-genre ad hoc pipeline to identify whether the information in a Twitter post (i.e., a “Tweet”) is indeed supported by the cited news article. Our approach is empirically based on a corpus of over 46.86 million Tweets and is divided into two tasks: (i) developing models to detect Tweets that contain claims worth fact-checking and (ii) verifying whether the claims made in a Tweet are supported by the newswire article it cites. Unlike previous studies that detect unsubstantiated information by post hoc analysis of propagation patterns, we seek to identify reliable support (or the lack of it) before the misinformation begins to spread. We find that nearly half of the Tweets (43.4%) are not factual and hence not worth checking, a significant filter given the sheer volume of social media posts on a platform such as Twitter. Moreover, among the Tweets that contain a seemingly factual claim while citing a news article as supporting evidence, at least 1% are not actually supported by the cited news and are hence misleading.
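  The two-stage pipeline described above can be illustrated with a toy sketch. The heuristics below (`is_check_worthy`, `is_supported`) are hypothetical stand-ins for the paper's learned models, shown only to make the two tasks concrete:

```python
import re

def is_check_worthy(tweet: str) -> bool:
    # Toy heuristic standing in for task (i): treat tweets that
    # contain numbers or assertive reporting verbs as factual claims.
    has_number = bool(re.search(r"\d", tweet))
    has_verb = any(w in tweet.lower() for w in ("confirmed", "reported", "shows"))
    return has_number or has_verb

def is_supported(claim: str, article: str, threshold: float = 0.5) -> bool:
    # Toy lexical-overlap check standing in for task (ii): what share
    # of the claim's words appear in the cited article's text?
    claim_words = set(claim.lower().split())
    overlap = claim_words & set(article.lower().split())
    return len(overlap) / len(claim_words) >= threshold

# A tweet passes the check-worthiness filter, then its claim is
# compared against the article it cites.
tweet = "Study shows 120 new cases reported today"
article = "Officials said a study shows 120 new cases were reported today"
if is_check_worthy(tweet):
    print(is_supported(tweet, article))
```

  The real pipeline replaces both heuristics with trained classifiers, but the control flow (filter first, verify second) is the same, which is what makes the 43.4% filtering figure a meaningful efficiency gain.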
  3. This paper introduces and presents a first analysis of a uniquely curated dataset of misinformation, disinformation, and rumors spreading on Twitter about the 2020 U.S. election. Previous research on misinformation—an umbrella term for false and misleading content—has largely focused either on broad categories, using a finite set of keywords to cover a complex topic, or on a few, focused case studies, with increased precision but limited scope. Our approach, by comparison, leverages real-time reports collected from September through November 2020 to develop a comprehensive dataset of tweets connected to 456 distinct misinformation stories from the 2020 U.S. election (our ElectionMisinfo2020 dataset), 307 of which sowed doubt in the legitimacy of the election. By relying on real-time incidents and streaming data, we generate a curated dataset that not only provides more granularity than a large collection based on a finite number of search terms, but also an improved opportunity for generalization compared to a small set of case studies. Though the emphasis is on misleading content, not all of the tweets linked to a misinformation story are false: some are questions, opinions, corrections, or factual content that nonetheless contributes to misperceptions. Along with a detailed description of the data, this paper provides an analysis of a critical subset of election-delegitimizing misinformation in terms of size, content, temporal diffusion, and partisanship. We label key ideological clusters of accounts within interaction networks, describe common misinformation narratives, and identify those accounts which repeatedly spread misinformation. We document the asymmetry of misinformation spread: accounts associated with support for President Biden shared stories in ElectionMisinfo2020 far less than accounts supporting his opponent. That asymmetry remained among the accounts who were repeatedly influential in the spread of misleading content that sowed doubt in the election: all but two of the top 100 ‘repeat spreader’ accounts were supporters of then-President Trump. These findings support the implementation and enforcement of ‘strike rules’ on social media platforms, directly addressing the outsized role of repeat spreaders.
  4. Well-intentioned users sometimes enable the spread of misinformation because they have limited context about where the information originated and why it is spreading. Building on recommendations from prior research on tackling misinformation, we explore the potential to support media literacy through platform design. We develop and design an intervention consisting of a tweet trajectory, to illustrate how information reached a user, and contextual cues, to support credibility judgments about accounts that amplify, manufacture, produce, or situate in the vicinity of problematic content (AMPS). Using a research-through-design approach, we demonstrate how the proposed intervention can help discern credible actors, challenge blind faith among online friends, evaluate the cost of associating with online actors, and expose hidden agendas. Such facilitation of credibility assessment can encourage more responsible sharing of content. Through our findings, we argue for using trajectory-based designs to support informed information sharing, advocate for feature updates that nudge users with reflective cues, and promote platform-driven media literacy.
  5. As of March 2021, the SARS-CoV-2 virus has been responsible for over 115 million cases of COVID-19 worldwide, resulting in over 2.5 million deaths. As the virus spread exponentially, so did its media coverage, resulting in a proliferation of conflicting information on social media platforms—a so-called “infodemic.” In this viewpoint, we survey past literature investigating the role of automated accounts, or “bots,” in spreading such misinformation, drawing connections to the COVID-19 pandemic. We also review strategies used by bots to spread (mis)information and examine the potential origins of bots. We conclude by conducting and presenting a secondary analysis of data sets of known bots in which we find that up to 66% of bots are discussing COVID-19. The proliferation of COVID-19 (mis)information by bots, coupled with human susceptibility to believing and sharing misinformation, may well impact the course of the pandemic. 