

Search for: All records

Award ID contains: 2120496

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Claims of election fraud throughout the 2020 U.S. Presidential Election and during the lead-up to the January 6, 2021 insurrection attempt have drawn attention to the urgent need to better understand how people interpret and act on disinformation. In this work, we present three primary contributions: (1) a framework for understanding the interaction between participatory disinformation and informal and tactical mobilization; (2) three case studies from the 2020 U.S. election analyzed using detailed temporal, content, and thematic analysis; and (3) a qualitative coding scheme for understanding how digital disinformation functions to mobilize online audiences. We combine resource mobilization theory with previous work examining participatory disinformation campaigns and "deep stories" to show how false or misleading information functioned to mobilize online audiences before, during, and after election day. Our analysis highlights how users on Twitter collaboratively construct and amplify alleged evidence of fraud that is used to facilitate action, both online and off. We find that mobilization depends on the selective amplification of false or misleading tweets by influencers, on the framing around those claims, and on the perceived credibility of their source. These processes form a self-reinforcing cycle in which audiences collaborate in the construction of a misleading version of reality, which in turn leads to offline actions that are used to further reinforce a manufactured reality. Through this work, we hope to better inform future interventions.
  2. The 2020 US election was accompanied by an effort to spread a false meta-narrative of widespread voter fraud. This meta-narrative took hold among a substantial portion of the US population, undermining trust in election procedures and results, and eventually motivating the events of 6 January 2021. We examine this effort as a domestic and participatory disinformation campaign in which a variety of influencers—including hyperpartisan media and political operatives—worked alongside ordinary people to produce and amplify misleading claims, often unwittingly. To better understand the nature of participatory disinformation, we examine three cases of misleading claims of voter fraud, applying an interpretive, mixed method approach to the analysis of social media data. Contrary to a prevailing view of such campaigns as coordinated and/or elite-driven efforts, this work reveals a more hybrid form, demonstrating both top-down and bottom-up dynamics that are more akin to cultivation and improvisation. 
  3. Decades-old research about how and why people share rumors is even more relevant in a world with social media. 
  4. The prevalence and spread of online misinformation during the 2020 US presidential election served to perpetuate a false belief in widespread election fraud. Though much research has focused on how social media platforms connected people to election-related rumors and conspiracy theories, less is known about the search engine pathways that linked users to news content with the potential to undermine trust in elections. In this paper, we present novel data related to the content of political headlines during the 2020 US election period. We scraped over 800,000 headlines from Google's search engine results pages (SERPs) in response to 20 election-related keywords (10 general, e.g., "Ballots", and 10 conspiratorial, e.g., "Voter fraud") when searched from 20 cities across 16 states; a minimal sketch of this keyword-by-location collection design appears after this listing. We present results from qualitative coding of 5,600 headlines focused on the prevalence of delegitimizing information. Our results reveal that videos (as compared to stories, search results, and advertisements) are the most problematic in terms of exposing users to delegitimizing headlines. We also illustrate how headline content varies when searching from a swing state, adopting a conspiratorial search keyword, or reading from media domains with higher political bias. We conclude with policy recommendations on data transparency that would allow researchers to continue monitoring search engines during elections.
  5. This paper introduces and presents a first analysis of a uniquely curated dataset of misinformation, disinformation, and rumors spreading on Twitter about the 2020 U.S. election. Previous research on misinformation (an umbrella term for false and misleading content) has largely focused either on broad categories, using a finite set of keywords to cover a complex topic, or on a small number of case studies, with increased precision but limited scope. Our approach, by comparison, leverages real-time reports collected from September through November 2020 to develop a comprehensive dataset of tweets connected to 456 distinct misinformation stories from the 2020 U.S. election (our ElectionMisinfo2020 dataset), 307 of which sowed doubt in the legitimacy of the election. By relying on real-time incidents and streaming data, we generate a curated dataset that not only provides more granularity than a large collection based on a finite set of search terms but also offers a better opportunity for generalization than a small set of case studies. Though the emphasis is on misleading content, not all of the tweets linked to a misinformation story are false: some are questions, opinions, corrections, or factual content that nonetheless contributes to misperceptions. Along with a detailed description of the data, this paper provides an analysis of a critical subset of election-delegitimizing misinformation in terms of size, content, temporal diffusion, and partisanship. We label key ideological clusters of accounts within interaction networks, describe common misinformation narratives, and identify the accounts that repeatedly spread misinformation. We document the asymmetry of misinformation spread: accounts associated with support for President Biden shared stories in ElectionMisinfo2020 far less often than accounts supporting his opponent. That asymmetry remained among the accounts that were repeatedly influential in the spread of misleading content that sowed doubt in the election: all but two of the top 100 ‘repeat spreader’ accounts were supporters of then-President Trump (a sketch of this per-account tally appears after this listing). These findings support the implementation and enforcement of ‘strike rules’ on social media platforms, directly addressing the outsized role of repeat spreaders.
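
The headline collection described in item 4 is essentially a crossed design: every keyword is searched from every query city, and each returned headline is stored with its keyword, location, and result type for later coding. The Python sketch below is a hypothetical illustration of that structure only, not the authors' tooling; the keyword and city lists are placeholders (only "Ballots" and "Voter fraud" appear in the abstract), and fetch_serp_headlines is a stub because the abstract does not describe how location-scoped queries were issued or parsed.

import csv
from dataclasses import dataclass
from typing import List

@dataclass
class Headline:
    keyword: str       # search term that produced the result
    location: str      # city the query was issued from
    result_type: str   # e.g., "story", "video", "ad", "search result"
    text: str          # headline text as shown on the results page
    domain: str        # media domain hosting the linked page

# Only "Ballots" and "Voter fraud" come from the abstract; the rest are placeholders.
GENERAL_KEYWORDS = ["Ballots", "Mail-in voting"]
CONSPIRATORIAL_KEYWORDS = ["Voter fraud", "Dead voters"]
CITIES = ["Phoenix, AZ", "Madison, WI", "Seattle, WA"]  # placeholder subset of the 20 cities

def fetch_serp_headlines(keyword: str, city: str) -> List[Headline]:
    """Stub for the collection step: a real pipeline would issue a
    location-scoped query and parse the returned result blocks."""
    return []

def collect(out_path: str = "headlines.csv") -> None:
    """Run every keyword from every city and write one row per headline."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["keyword", "city", "result_type", "text", "domain"])
        for keyword in GENERAL_KEYWORDS + CONSPIRATORIAL_KEYWORDS:
            for city in CITIES:
                for h in fetch_serp_headlines(keyword, city):
                    writer.writerow([h.keyword, h.location, h.result_type, h.text, h.domain])

if __name__ == "__main__":
    collect()

Writing one row per headline keeps the keyword-by-location structure intact, so later coding can compare swing-state versus non-swing-state queries or general versus conspiratorial keywords.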
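
The ‘repeat spreader’ ranking in item 5 reduces to counting, for each account, the number of distinct election-delegitimizing stories it shared and keeping the 100 largest counts. The sketch below assumes a flat per-tweet table with account_id, story_id, and delegitimizing columns; those field names are assumptions, and the released ElectionMisinfo2020 data may use a different schema.

import csv
from collections import defaultdict
from typing import Dict, List, Set, Tuple

def top_repeat_spreaders(tweets_csv: str, top_n: int = 100) -> List[Tuple[str, int]]:
    """Return (account_id, distinct-story count) pairs, largest count first."""
    stories_by_account: Dict[str, Set[str]] = defaultdict(set)
    with open(tweets_csv, newline="") as f:
        for row in csv.DictReader(f):
            # Keep only tweets tied to stories coded as election-delegitimizing.
            if row.get("delegitimizing") == "1":
                stories_by_account[row["account_id"]].add(row["story_id"])
    counts = [(acct, len(stories)) for acct, stories in stories_by_account.items()]
    return sorted(counts, key=lambda kv: kv[1], reverse=True)[:top_n]

if __name__ == "__main__":
    for account, n_stories in top_repeat_spreaders("election_misinfo_2020_tweets.csv"):
        print(account, n_stories)

Counting distinct stories (a set per account) rather than raw tweet volume matches the abstract's framing of repeat spreaders as accounts involved in many separate misinformation stories.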