
Search for: All records

Award ID contains: 2041068

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Struggling to curb misinformation, social media platforms are experimenting with design interventions to enhance consumption of credible news on their platforms. Some of these interventions, such as the use of warning messages, are examples of nudges---a choice-preserving technique to steer behavior. Despite their application, we do not know whether nudges could steer people into making conscious news credibility judgments online and, if they do, under what constraints. To answer these questions, we combine nudge techniques with heuristic-based information processing to design NudgeCred--a browser extension for Twitter. NudgeCred directs users' attention to two design cues: authority of a source and other users' collective opinion on a report, by activating three design nudges---Reliable, Questionable, and Unreliable, each denoting a particular level of credibility for news tweets. In a controlled experiment, we found that NudgeCred significantly helped users (n=430) distinguish news tweets' credibility, unrestricted by three behavioral confounds---political ideology, political cynicism, and media skepticism. A five-day field deployment with twelve participants revealed that NudgeCred improved their recognition of news items and attention towards all of our nudges, particularly towards Questionable. Among other considerations, participants proposed that designers should incorporate heuristics that users would trust. Our work informs nudge-based system design approaches for online media.
    Free, publicly-accessible full text available October 13, 2022
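    The abstract above does not spell out how the extension decides which nudge to show; the minimal sketch below only illustrates how the three nudge levels might be assigned from the two design cues (source authority and collective opinion). The source lists, field names, and the 0.5 threshold are illustrative assumptions, not the authors' implementation.

        # Hypothetical sketch: mapping the two design cues to NudgeCred-style
        # nudge levels. Source lists and the 0.5 threshold are illustrative only.
        from dataclasses import dataclass

        RELIABLE_SOURCES = {"apnews.com", "reuters.com"}   # assumed authority list
        UNRELIABLE_SOURCES = {"exampledailybuzz.com"}      # assumed authority list

        @dataclass
        class NewsTweet:
            source_domain: str
            endorsement_ratio: float  # share of replies/quotes endorsing the report, 0..1

        def assign_nudge(tweet: NewsTweet) -> str:
            """Return Reliable, Questionable, or Unreliable for a news tweet."""
            if tweet.source_domain in UNRELIABLE_SOURCES:
                return "Unreliable"
            if tweet.source_domain in RELIABLE_SOURCES and tweet.endorsement_ratio >= 0.5:
                return "Reliable"
            return "Questionable"

        print(assign_nudge(NewsTweet("reuters.com", 0.8)))           # Reliable
        print(assign_nudge(NewsTweet("exampledailybuzz.com", 0.9)))  # Unreliable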
  2. Online discussion platforms provide a forum to strengthen and propagate belief in misinformed conspiracy theories. Yet, they also offer avenues for conspiracy theorists to express their doubts and experiences of cognitive dissonance. Such expressions of dissonance may shed light on who abandons misguided beliefs and under what circumstances. This paper characterizes self-disclosures of dissonance about QAnon, a conspiracy theory initiated by a mysterious leader "Q" and popularized by their followers ("anons"), in conspiratorial subreddits. To understand what dissonance and disbelief mean within conspiracy communities, we first characterize their social imaginaries, a broad understanding of how people collectively imagine their social existence. Focusing on 2K posts from two image boards, 4chan and 8chan, and 1.2M comments and posts from 12 subreddits dedicated to QAnon, we adopt a mixed-methods approach to uncover the symbolic language representing the movement, expectations, practices, heroes, and foes of the QAnon community. We use these social imaginaries to create a computational framework for distinguishing belief and dissonance from general discussion about QAnon, surfacing in the 1.2M comments. We investigate the dissonant comments to characterize the dissonance expressed along QAnon social imaginaries. Further, analyzing user engagement with QAnon conspiracy subreddits, we find that self-disclosures of dissonance correlate with a significant decrease in user contributions and, ultimately, with their departure from the community. Our work offers a systematic framework for uncovering the dimensions and coded language related to QAnon social imaginaries and can serve as a toolbox for studying other conspiracy theories across different platforms. We also contribute a computational framework for identifying dissonance self-disclosures and measuring the changes in user engagement surrounding dissonance. Our work provides insights into designing dissonance-based interventions that can potentially dissuade conspiracists from engaging in online conspiracy discussion communities.
    Free, publicly-accessible full text available October 13, 2022
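    The lexicon and classifier behind this framework are not reproduced in the abstract; the sketch below only illustrates the general idea of tagging dissonance self-disclosures with coded phrases and measuring the drop in contributions around the first such disclosure. The phrase list, toy data, and split point are hypothetical.

        # Hypothetical sketch: phrase-based dissonance tagging plus a simple
        # before/after engagement comparison. Phrases and data are illustrative only.
        from statistics import mean

        DISSONANCE_PHRASES = ["losing faith", "nothing happened", "i was wrong", "i'm done"]

        def is_dissonant(comment: str) -> bool:
            """Flag a comment that contains any coded dissonance phrase."""
            text = comment.lower()
            return any(phrase in text for phrase in DISSONANCE_PHRASES)

        def engagement_change(daily_posts: list, first_dissonance_day: int) -> float:
            """Mean daily contributions after the first dissonant comment minus before."""
            return mean(daily_posts[first_dissonance_day:]) - mean(daily_posts[:first_dissonance_day])

        comments = ["Trust the plan, anons.", "Honestly, nothing happened. I'm done."]
        print([is_dissonant(c) for c in comments])        # [False, True]
        print(engagement_change([5, 6, 4, 1, 0, 0], 3))   # negative value: reduced activity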
  3. Widespread conspiracy theories, like those motivating anti-vaccination attitudes or climate change denial, propel collective action and bear society-wide consequences. Yet, empirical research has largely studied conspiracy theory adoption as an individual pursuit, rather than as a socially mediated process. What makes users join communities endorsing and spreading conspiracy theories? We leverage longitudinal data from 56 conspiracy communities on Reddit to compare individual and social factors determining which users join the communities. Using a quasi-experimental approach, we first identify 30K future conspiracists (FC) and 30K matched non-conspiracists (NC). We then provide empirical evidence of the importance of social factors across six dimensions relative to the individual factors by analyzing 6 million Reddit comments and posts. Specifically, among social factors, we find that dyadic interactions with members of the conspiracy communities and marginalization outside of the conspiracy communities are the most important social precursors to conspiracy joining, even outperforming individual factor baselines. Our results offer quantitative backing to understand social processes and echo chamber effects in conspiratorial engagement, with important implications for democratic institutions and online communities.
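    A minimal sketch of the quasi-experimental setup described above: matching each future conspiracist to a similar non-conspiracist on individual covariates, then comparing a social factor across the matched pairs. The feature names, matching rule, and toy data are assumptions, not the study's pipeline.

        # Hypothetical sketch: nearest-neighbour matching on individual factors,
        # then comparing one social factor (dyadic interactions) across matched pairs.
        from dataclasses import dataclass

        @dataclass
        class RedditUser:
            account_age_days: int     # individual factor (used for matching)
            total_comments: int       # individual factor (used for matching)
            dyadic_interactions: int  # social factor: replies exchanged with community members

        def match(fc: RedditUser, pool: list) -> RedditUser:
            """Pick the non-conspiracist most similar to fc on the individual covariates."""
            return min(pool, key=lambda nc: abs(nc.account_age_days - fc.account_age_days)
                                            + abs(nc.total_comments - fc.total_comments))

        fc_users = [RedditUser(400, 120, 9), RedditUser(800, 300, 14)]
        nc_pool = [RedditUser(390, 110, 2), RedditUser(820, 310, 3)]

        pairs = [(fc, match(fc, nc_pool)) for fc in fc_users]
        gap = sum(fc.dyadic_interactions - nc.dyadic_interactions for fc, nc in pairs) / len(pairs)
        print(f"Mean excess dyadic interactions among future joiners: {gap:.1f}")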
  4. In recent years, the emergence of fake news outlets has underscored the importance of news literacy. This is particularly critical in social media, where the flood of information makes it difficult for people to assess the veracity of false stories from such deceitful sources. Therefore, people oftentimes fail to look skeptically at these stories. We explore a way to circumvent this problem by nudging users into making conscious assessments of which online content is credible. For this purpose, we developed FeedReflect, a browser extension. The extension nudges users to pay more attention and uses reflective questions to engage them in news credibility assessment on Twitter. We recruited a small number of university students to use this tool on Twitter. Both qualitative and quantitative analysis of the study suggests the extension helped people accurately assess the credibility of news. This implies FeedReflect can be used by a broader audience to improve online news literacy.
  5. Online communities play a crucial role in disseminating conspiracy theories. New theories often emerge in the aftermath of catastrophic events. Despite evidence of their widespread appeal, surprisingly little is known about who participates in these event-specific conspiratorial discussions or how these discussions evolve over time. We study r/conspiracy, an active Reddit community of more than 200,000 users dedicated to conspiratorial discussions. By focusing on four tragic events and 10 years of discussions, we find three distinct user cohorts: joiners, who had never participated on Reddit and joined r/conspiracy only after the event; converts, who were active Reddit users but joined r/conspiracy only after the event; and veterans, who are longstanding r/conspiracy members. While joiners and converts have a shorter lifespan in the community in comparison to the veterans, joiners are more active during their shorter tenure, becoming increasingly engaged over time. Finally, to investigate how these events affect users' conspiratorial discussions, we adopt a causal inference approach to analyze user comments around the time of the events. We find that discussions happening after the event exhibit signs of emotional shock, increased language complexity, and simultaneous expressions of certainty and doubtfulness. Our work provides insight into how online communities may detect new conspiracy theories that emerge following dramatic events and, in the process, stop them before they spread.
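    The abstract's three cohorts can be read as a simple rule over two timestamps per user; the sketch below uses hypothetical field names and an arbitrary event date to illustrate that rule, not the paper's exact operationalization.

        # Hypothetical sketch: assigning joiner / convert / veteran from a user's
        # first Reddit activity and first r/conspiracy activity relative to an event.
        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class UserHistory:
            first_reddit_activity: date       # first post or comment anywhere on Reddit
            first_rconspiracy_activity: date  # first post or comment in r/conspiracy

        def classify(user: UserHistory, event: date) -> str:
            if user.first_rconspiracy_activity < event:
                return "veteran"   # already in r/conspiracy before the event
            if user.first_reddit_activity >= event:
                return "joiner"    # new to Reddit, arrived only after the event
            return "convert"       # active elsewhere on Reddit, joined r/conspiracy after the event

        event_day = date(2013, 4, 15)  # illustrative event date
        print(classify(UserHistory(date(2013, 4, 20), date(2013, 4, 20)), event_day))  # joiner
        print(classify(UserHistory(date(2011, 1, 1), date(2013, 5, 2)), event_day))    # convert
        print(classify(UserHistory(date(2010, 6, 1), date(2012, 3, 9)), event_day))    # veteran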