
Title: Cognitive cascades: How to model (and potentially counter) the spread of fake news
Understanding the spread of false or dangerous beliefs, often called misinformation or disinformation, through a population has never seemed so urgent. Network science researchers have often taken a page from epidemiologists and modeled the spread of false beliefs as analogous to a disease spreading through a social network. Absent from those disease-inspired models, however, is an internal model of an individual's current beliefs, even though cognitive science has increasingly documented that the interaction between mental models and incoming messages is crucial to whether a message is adopted or rejected. Some computational social science modelers analyze agent-based models in which individuals do have simulated cognition, but these models often lack a key strength of network science: empirically driven network structures. We introduce a cognitive cascade model, a public opinion diffusion (POD) model that combines a network science belief cascade approach with an internal cognitive model of the individual agents, as in opinion diffusion models, and adds media institutions as agents that initiate opinion cascades. We show that the model, even with a very simplistic belief function capturing two cognitive effects cited in disinformation studies (dissonance and exposure), adds expressive power over existing cascade models. We analyze the cognitive cascade model with this simple cognitive function across various graph topologies and institutional messaging patterns. We argue from our results that the population-level aggregate outcomes of the model qualitatively match what has been reported in COVID-related public opinion polls, and that the model dynamics lend insight into how to address the spread of problematic beliefs. The overall model sets up a framework with which social science misinformation researchers and computational opinion diffusion modelers can join forces to understand, and hopefully learn how best to counter, the spread of disinformation and “alternative facts.”
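To make the core mechanism concrete, the sketch below is a minimal Python illustration, not the paper's implementation: agents on a network hold a belief value, a media agent broadcasts a message to its audience, and each agent adopts and re-shares the message with a probability that decays with the distance between the message and its current belief (a simple stand-in for the dissonance and exposure effects the abstract names). The exponential acceptance rule, the 0-6 belief scale, and every name in the code are our own assumptions.

```python
import random
import networkx as nx  # for an empirically motivated network topology

def accept_prob(belief, message, gamma=1.0):
    """Illustrative dissonance-style acceptance rule (our assumption):
    messages far from the current belief are exponentially less likely
    to be adopted."""
    return 2 ** (-gamma * abs(belief - message))

def cognitive_cascade(graph, beliefs, seed_nodes, message, gamma=1.0):
    """Spread `message` (an integer belief value) from media-seeded nodes;
    an agent that accepts the message updates its belief and re-shares."""
    frontier = [n for n in seed_nodes
                if random.random() < accept_prob(beliefs[n], message, gamma)]
    for n in frontier:
        beliefs[n] = message
    seen = set(frontier)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.neighbors(u):
                if v in seen:
                    continue
                seen.add(v)  # each agent considers the message once
                if random.random() < accept_prob(beliefs[v], message, gamma):
                    beliefs[v] = message  # adoption: belief moves to the message
                    nxt.append(v)         # and the agent re-shares it
        frontier = nxt
    return beliefs

# Toy run: a media institution broadcasts an extreme message (6 on a 0-6
# scale) to a random audience on a scale-free topology
G = nx.barabasi_albert_graph(500, 3, seed=42)
beliefs = {n: random.randint(0, 6) for n in G.nodes}
media_audience = random.sample(list(G.nodes), 25)
beliefs = cognitive_cascade(G, beliefs, media_audience, message=6)
```

Sweeping the decay parameter gamma, the graph generator, and the messaging pattern is one way to reproduce the kind of topology-and-messaging analysis the abstract describes.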
Award ID(s):
1934553 2021874
PAR ID:
10350059
Author(s) / Creator(s):
Editor(s):
Cremonini, Marco
Date Published:
Journal Name:
PLOS ONE
Volume:
17
Issue:
1
ISSN:
1932-6203
Page Range / eLocation ID:
e0261811
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Belief change and spread have been studied in many disciplines—from psychology, sociology, economics and philosophy, to biology, computer science and statistical physics—but we still do not have a firm grasp on why some beliefs change more easily and spread faster than others. To fully capture the complex social-cognitive system that gives rise to belief dynamics, we first review insights about structural components and processes of belief dynamics studied within different disciplines. We then outline a unifying quantitative framework that enables theoretical and empirical comparisons of different belief dynamic models. This framework uses a statistical physics formalism, grounded in cognitive and social theory, as well as empirical observations. We show how this framework can be used to integrate extant knowledge and develop a more comprehensive understanding of belief dynamics.
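As one way to picture the statistical physics formalism the authors invoke, the following minimal sketch (our illustrative assumption, not the paper's model) treats an agent's interrelated beliefs as coupled spins, assigns low energy to internally coherent and socially supported configurations, and updates them with standard Glauber dynamics:

```python
import math
import random

def energy(beliefs, couplings, social_field, alpha=1.0):
    """Energy of a belief configuration: low when beliefs cohere with each
    other (internal term) and with the social environment (social term).
    The specific functional form is an illustrative assumption."""
    internal = -sum(w * beliefs[i] * beliefs[j]
                    for (i, j), w in couplings.items())
    social = -alpha * sum(social_field[i] * b for i, b in enumerate(beliefs))
    return internal + social

def glauber_step(beliefs, couplings, social_field, temperature=1.0):
    """Flip one randomly chosen belief with probability given by the
    resulting energy change (Glauber dynamics)."""
    i = random.randrange(len(beliefs))
    flipped = beliefs[:]
    flipped[i] = -flipped[i]
    dE = (energy(flipped, couplings, social_field)
          - energy(beliefs, couplings, social_field))
    if random.random() < 1.0 / (1.0 + math.exp(dE / temperature)):
        return flipped
    return beliefs

# Three interrelated beliefs (+1/-1), one dissonant coupling, mild social pressure
beliefs = [1, -1, 1]
couplings = {(0, 1): 0.8, (1, 2): 0.5, (0, 2): -0.3}
social_field = [0.2, 0.2, -0.1]
for _ in range(1000):
    beliefs = glauber_step(beliefs, couplings, social_field)
```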
  2. Recent years have seen a surge in research on why people fall for misinformation and what can be done about it. Drawing on a framework that conceptualizes truth judgments of true and false information as a signal-detection problem, the current article identifies three inaccurate assumptions in the public and scientific discourse about misinformation: (1) People are bad at discerning true from false information, (2) partisan bias is not a driving force in judgments of misinformation, and (3) gullibility to false information is the main factor underlying inaccurate beliefs. Counter to these assumptions, we argue that (1) people are quite good at discerning true from false information, (2) partisan bias in responses to true and false information is pervasive and strong, and (3) skepticism against belief-incongruent true information is much more pronounced than gullibility to belief-congruent false information. These conclusions have significant implications for person-centered misinformation interventions to tackle inaccurate beliefs. 
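The signal-detection framing separates discernment (how well people tell true from false headlines) from bias (an overall tendency to answer "true"). The sketch below computes the two standard signal-detection measures, sensitivity d' and criterion c, from hypothetical response counts; the formulas are textbook SDT, but the data and function names are our own:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Standard signal-detection measures for truth judgments:
    d' (discernment between true and false headlines) and
    c (response bias toward answering 'true')."""
    z = NormalDist().inv_cdf
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical data: one participant rates 40 true and 40 false headlines
d, c = sdt_measures(hits=32, misses=8, false_alarms=12, correct_rejections=28)
print(f"d' = {d:.2f}, c = {c:.2f}")  # positive d' indicates good discernment
```

Computing d' and c separately for belief-congruent and belief-incongruent items is how one would test the partisan-bias and skepticism claims in the abstract.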
  3. Widespread online misinformation can cause public panic and serious economic damage. The misinformation containment problem aims at limiting the spread of misinformation in online social networks by launching competing campaigns. Motivated by realistic scenarios, we present an analysis of the misinformation containment problem for the case in which an arbitrary number of cascades is allowed. This paper makes four contributions. First, we provide a formal model for multi-cascade diffusion and introduce an important concept called cascade priority. Second, we show that the misinformation containment problem cannot be approximated within a factor of Ω(2^(log^(1−ε) n^4)) in polynomial time unless NP ⊆ DTIME(n^(polylog n)). Third, we introduce several types of cascade priority that are frequently seen in real social networks. Finally, we design novel algorithms for solving the misinformation containment problem. The effectiveness of the proposed algorithms is supported by encouraging experimental results.
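To illustrate what cascade priority means operationally, here is a minimal sketch (an illustrative reconstruction, not the paper's algorithm or notation) of multi-cascade diffusion in which a priority order decides which cascade a node adopts when several reach it in the same round:

```python
import random
from collections import defaultdict

def multi_cascade(graph, seeds, priority, p=0.1):
    """Independent-cascade-style diffusion with competing cascades.
    `graph` maps node -> list of neighbors, `seeds` maps cascade id -> seed
    nodes, and `priority` ranks cascade ids (earlier in the list wins
    simultaneous arrivals). Returns each reached node's adopted cascade.
    The tie-breaking rule is one illustrative choice of cascade priority."""
    label = {}
    frontier = []
    for cid, nodes in seeds.items():
        for n in nodes:
            label[n] = cid
            frontier.append(n)
    while frontier:
        arrivals = defaultdict(list)  # node -> cascade ids reaching it this round
        for u in frontier:
            for v in graph[u]:
                if v not in label and random.random() < p:
                    arrivals[v].append(label[u])
        frontier = []
        for v, cids in arrivals.items():
            label[v] = min(cids, key=priority.index)  # priority breaks ties
            frontier.append(v)
    return label

# Toy example: misinformation cascade "M" vs. a correcting cascade "C",
# with the correction given higher priority on simultaneous arrival
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
adopted = multi_cascade(graph, seeds={"M": [0], "C": [4]},
                        priority=["C", "M"], p=0.5)
```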
  4. To illuminate how social media can be leveraged to glean insights into public health issues such as e-cigarette use, we use a social media analytics and research testbed (SMART) dashboard to observe Twitter messages and follow content about e-cigarettes in different cities across the U.S. Our case studies indicate that the majority of e-cigarette tweets are positive (68%), which represents a potential problem for public health. Stigma plays the most important role in both confirmed and rejected messages about e-cigarettes. We also noticed that some advocates of e-cigarettes might be hybrid human-bot accounts (or multiple users sharing one account). Our key findings demonstrate the use of the SMART dashboard as a means of public health-related belief surveillance and of identifying campaign targets and the informational needs of different communities in real time. Future uses of this tool include monitoring social messages about e-cigarettes to combat the spread of tobacco-related misinformation and disinformation, and detecting and targeting the informational needs of communities for intervention.
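As a toy illustration of the kind of sentiment tallying such a dashboard performs (the SMART dashboard's actual classifier is not described in this abstract; the lexicon and tweets below are invented):

```python
# Minimal lexicon-based sentiment tally (illustrative only)
POSITIVE = {"love", "great", "helps", "quit", "smooth"}
NEGATIVE = {"harmful", "addictive", "ban", "risky", "gross"}

def tweet_sentiment(text):
    """Classify a tweet by counting positive vs. negative lexicon hits."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweets = [
    "vaping helps me quit smoking, love it",
    "e-cigs are addictive and risky",
    "trying a new flavor today",
]
counts = {}
for t in tweets:
    s = tweet_sentiment(t)
    counts[s] = counts.get(s, 0) + 1
share_positive = counts.get("positive", 0) / len(tweets)
```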
  5. Countering misinformation can reduce belief in the moment, but corrective messages quickly fade from memory. We tested whether the longer-term impact of fact-checks depends on when people receive them. In two experiments (total N = 2,683), participants read true and false headlines taken from social media. In the treatment conditions, “true” and “false” tags appeared before, during, or after participants read each headline. Participants in a control condition received no information about veracity. One week later, participants in all conditions rated the same headlines’ accuracy. Providing fact-checks after headlines (debunking) improved subsequent truth discernment more than providing the same information during (labeling) or before (prebunking) exposure. This finding informs the cognitive science of belief revision and has practical implications for social media platform designers.
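A simple way to operationalize the truth-discernment outcome is a difference score: mean perceived accuracy of true headlines minus that of false headlines, compared across conditions. The sketch below uses hypothetical ratings and may differ from the paper's actual analysis:

```python
from statistics import mean

def discernment(ratings):
    """Truth discernment one week later: mean perceived accuracy of true
    headlines minus that of false headlines (simple difference score)."""
    true_r = [r for r, is_true in ratings if is_true]
    false_r = [r for r, is_true in ratings if not is_true]
    return mean(true_r) - mean(false_r)

# Hypothetical (rating, is_true) pairs on a 1-7 accuracy scale,
# one list per fact-check timing condition
conditions = {
    "prebunking": [(5, True), (4, False), (6, True), (4, False)],
    "labeling":   [(5, True), (3, False), (6, True), (4, False)],
    "debunking":  [(6, True), (2, False), (6, True), (3, False)],
    "control":    [(5, True), (4, False), (5, True), (5, False)],
}
for name, ratings in conditions.items():
    print(f"{name}: discernment = {discernment(ratings):.2f}")
```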