Title: The Private Life of QAnon: A Mixed Methods Investigation of Americans' Exposure to QAnon Content on the Web
The QAnon movement has been credited with spreading disinformation and fueling online radicalization in the United States and around the globe. While some research has documented publicly visible communications and engagement with the QAnon movement, little work has examined individuals' actual exposure to QAnon content. In this paper, we investigate the extent to which Americans are exposed to QAnon websites, in what contexts, and to what effect. We employ a mixed methods review of 21 million website visits collected from a nationally representative sample of 1,238 American adults across laptops, smartphones, and tablets during the 2020 U.S. presidential election. Quantitative techniques reveal overall levels of exposure to QAnon and who is more likely to be exposed, and qualitative techniques provide rich information about how participants came to be exposed to QAnon and how it fit within their broader media diets. We find that: (1) exposure to QAnon websites is limited and stratified by political ideology and news consumption; (2) exposure tends to occur within right-wing media ecosystems that align with QAnon beliefs; and (3) mixed methods approaches to analyzing digital trace data can provide rich insights that contextualize quantitative techniques. We discuss the implications of our findings for the design of interventions to lessen exposure to problematic material online and for future research on the spread of disinformation and extremist content.
Award ID(s):
2243822
PAR ID:
10646604
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
Proceedings of the ACM on Human-Computer Interaction
Date Published:
Journal Name:
Proceedings of the ACM on Human-Computer Interaction
Volume:
8
Issue:
CSCW2
ISSN:
2573-0142
Page Range / eLocation ID:
1 to 34
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Social media provide fertile ground where conspiracy theories and radical ideas can flourish, reach broad audiences, and sometimes lead to hate or violence beyond the online world itself. QAnon represents a notable example of a political conspiracy that started out on social media but turned mainstream, in part due to public endorsement by influential political figures. Nowadays, QAnon conspiracies often appear in the news, are part of political rhetoric, and are espoused by significant swaths of people in the United States. It is therefore crucial to understand how such a conspiracy took root online, and what led so many social media users to adopt its ideas. In this work, we propose a framework that exploits both social interaction and content signals to uncover evidence of user radicalization or support for QAnon. Leveraging a large dataset of 240M tweets collected in the run-up to the 2020 US Presidential election, we define and validate a multivariate metric of radicalization. We use it to separate users into distinct, naturally emerging classes of behaviors associated with radicalization processes, from self-declared QAnon supporters to hyperactive conspiracy promoters. We also analyze the impact of Twitter's moderation policies on the interactions among different classes: we discover aspects of moderation that succeed, yielding a substantial reduction in the endorsement received by hyperactive QAnon accounts. But we also uncover where moderation fails, showing how QAnon content amplifiers are not deterred or affected by the Twitter intervention. Our findings refine our understanding of online radicalization processes, reveal effective and ineffective aspects of moderation, and call for further investigation of the role social media play in the spread of conspiracies.
  2. We conducted a mixed-method, interpretative analysis of an online, cross-platform disinformation campaign targeting the White Helmets, a rescue group operating in rebel-held areas of Syria that have become the subject of a persistent effort of delegitimization. This research helps to conceptualize what a disinformation campaign is and how it works. Based on what we learned from this case study, we conclude that a comprehensive understanding of disinformation requires accounting for the spread of content across platforms and that social media platforms should increase collaboration to detect and characterize disinformation campaigns. 
  3. Fitch, T.; Lamm, C.; Leder, H.; Teßmar-Raible, K. (Ed.)
    The QAnon conspiracy posits that Satan-worshiping Democrats operate a covert child sex-trafficking operation, which Donald Trump is destined to expose and annihilate. Emblematic of the ease with which political misconceptions can spread through social media, QAnon originated in late 2017 and rapidly grew to shape the political beliefs of millions. To illuminate the process by which a conspiracy theory spreads, we report two computational studies examining the social network structure and semantic content of tweets produced by users central to the early QAnon network on Twitter. Using data mined in the summer of 2018, we examined over 800,000 tweets about QAnon made by about 100,000 users. The majority of users disseminated rather than produced information, serving to create an online echo chamber. Users appeared to hold a simplistic mental model in which political events are viewed as a struggle between antithetical forces, both observed and unobserved, of Good and Evil.
  4. Online discussion platforms provide a forum to strengthen and propagate belief in misinformed conspiracy theories. Yet, they also offer avenues for conspiracy theorists to express their doubts and experiences of cognitive dissonance. Such expressions of dissonance may shed light on who abandons misguided beliefs and under what circumstances. This paper characterizes self-disclosures of dissonance about QAnon, a conspiracy theory initiated by a mysterious leader "Q" and popularized by their followers, "anons," in conspiratorial subreddits. To understand what dissonance and disbelief mean within conspiracy communities, we first characterize their social imaginaries: a broad understanding of how people collectively imagine their social existence. Focusing on 2K posts from two image boards, 4chan and 8chan, and 1.2M comments and posts from 12 subreddits dedicated to QAnon, we adopt a mixed-methods approach to uncover the symbolic language representing the movement, expectations, practices, heroes, and foes of the QAnon community. We use these social imaginaries to create a computational framework for distinguishing belief and dissonance from general discussion about QAnon, surfacing in the 1.2M comments. We investigate the dissonant comments to characterize the dissonance expressed along QAnon social imaginaries. Further, analyzing user engagement with QAnon conspiracy subreddits, we find that self-disclosures of dissonance correlate with a significant decrease in user contributions and ultimately with their departure from the community. Our work offers a systematic framework for uncovering the dimensions and coded language related to QAnon social imaginaries and can serve as a toolbox for studying other conspiracy theories across different platforms. We also contribute a computational framework for identifying dissonance self-disclosures and measuring the changes in user engagement surrounding dissonance. Our work provides insights into designing dissonance-based interventions that can potentially dissuade conspiracists from engaging in online conspiracy discussion communities.
  5. This article explores how Twitter’s algorithmic timeline influences exposure to different types of external media. We use an agent-based testing method to compare chronological timelines and algorithmic timelines for a group of Twitter agents that emulated real-world archetypal users. We first find that algorithmic timelines exposed agents to external links at roughly half the rate of chronological timelines. Despite the reduced exposure, the proportional makeup of external links remained fairly stable in terms of source categories (major news brands, local news, new media, etc.). Notably, however, algorithmic timelines slightly increased the proportion of “junk news” websites in the external link exposures. While our descriptive evidence does not fully exonerate Twitter’s algorithm, it does characterize the algorithm as playing a fairly minor, supporting role in shifting media exposure for end users, especially considering upstream factors that create the algorithm’s input—factors such as human behavior, platform incentives, and content moderation. We conclude by contextualizing the algorithm within a complex system consisting of many factors that deserve future research attention. 