

This content will become publicly available on January 7, 2027

Title: Understanding Partisan Bias in Judgments of Misinformation: Identity Protection Versus Differential Knowledge
People overaccept information that supports their identity and underaccept information that opposes their identity—a phenomenon known as partisan bias. Although partisan-bias effects in judgments of misinformation are robust and pervasive, there is ongoing debate about whether partisan-bias effects arise from identity-protective motivated reasoning or differential knowledge of identity-congenial versus identity-uncongenial information. Prior empirical work has been unable to differentiate the two accounts because of a reliance on groups with pre-existing differences in knowledge (e.g., Democrats and Republicans). The current research addresses this issue by using randomly assigned rather than pre-existing identities. Across two experiments (N = 1,411), adult U.S. Prolific workers showed lower thresholds for accepting information that is congenial versus uncongenial to a randomly assigned identity, despite having no differences in prior knowledge. These results support theories that emphasize identity protection as a factor underlying partisan bias in the acceptance of misinformation, with important practical implications for misinformation interventions.
Award ID(s):
2040684
PAR ID:
10658458
Author(s) / Creator(s):
; ;
Publisher / Repository:
Sage
Date Published:
Journal Name:
Psychological Science
ISSN:
0956-7976
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Recent years have seen a surge in research on why people fall for misinformation and what can be done about it. Drawing on a framework that conceptualizes truth judgments of true and false information as a signal-detection problem, the current article identifies three inaccurate assumptions in the public and scientific discourse about misinformation: (1) People are bad at discerning true from false information, (2) partisan bias is not a driving force in judgments of misinformation, and (3) gullibility to false information is the main factor underlying inaccurate beliefs. Counter to these assumptions, we argue that (1) people are quite good at discerning true from false information, (2) partisan bias in responses to true and false information is pervasive and strong, and (3) skepticism against belief-incongruent true information is much more pronounced than gullibility to belief-congruent false information. These conclusions have significant implications for person-centered misinformation interventions to tackle inaccurate beliefs. 
  2. Researchers across many disciplines seek to understand how misinformation spreads with a view toward limiting its impact. One important question in this research is how people determine whether a given piece of news is real or fake. In the current article, we discuss the value of signal detection theory (SDT) in disentangling two distinct aspects in the identification of fake news: (a) ability to accurately distinguish between real news and fake news and (b) response biases to judge news as real or fake regardless of news veracity. The value of SDT for understanding the determinants of fake-news beliefs is illustrated with reanalyses of existing data sets, providing more nuanced insights into how partisan bias, cognitive reflection, and prior exposure influence the identification of fake news. Implications of SDT for the use of source-related information in the identification of fake news, interventions to improve people’s skills in detecting fake news, and the debunking of misinformation are discussed. 
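The SDT decomposition described above — separating the ability to distinguish real from fake news (d′) from the overall response bias to call news real or fake (the criterion c) — can be sketched in a few lines. This is a minimal illustration under standard SDT conventions, not the authors' analysis code; the function name `sdt_measures` and the log-linear (add-0.5) rate correction are assumptions for the sketch.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute SDT sensitivity (d') and response bias (c) from judgment counts.

    Convention for news veracity judgments:
      hit          = real news judged real
      false alarm  = fake news judged real
    A log-linear correction (add 0.5 per cell) keeps the z-transform finite
    when an observed rate would be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)            # discrimination: real vs. fake
    criterion = -0.5 * (z(hit_rate) + z(fa_rate)) # bias: >0 skeptical, <0 credulous
    return d_prime, criterion
```

On this decomposition, two respondents can accept the same amount of fake news for different reasons: one because of poor discrimination (low d′), the other because of a lax acceptance threshold (negative c) applied to all news alike.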
  3. People form perceptions and interpretations of AI through external sources prior to their interaction with new technology. For example, shared anecdotes and media stories influence prior beliefs that may or may not accurately represent the true nature of AI systems. We hypothesize that people's prior perceptions and beliefs will affect human-AI interactions and usage behaviors when using new applications. This paper presents a user experiment to explore the interplay between users' pre-existing beliefs about AI technology, individual differences, and previously established sources of cognitive bias from first impressions with an interactive AI application. We employed questionnaire measures as features to categorize users into profiles based on their prior beliefs and attitudes about technology. In addition, participants were assigned to one of two controlled conditions designed to evoke either positive or negative first impressions during an AI-assisted judgment task using an interactive application. The experiment and results provide empirical evidence that profiling users by surveying them on their prior beliefs and individual differences can be a beneficial approach for mitigating bias (and/or unanticipated usage), instead of seeking one-size-fits-all solutions. 
  4. Ubiquitous misinformation on social media threatens the health and well-being of young people. We review research on susceptibility to misinformation, why it spreads, and how these mechanisms might operate developmentally. Although we identify many research gaps, results suggest that cognitive ability, thinking styles, and metacognitive scrutiny of misinformation are protective, but early adverse experiences can bias information processing and sow seeds of mistrust. We find that content knowledge is not sufficient to protect against misinformation, but that it, along with life experiences, provides a foundation for gist plausibility (true in principle, rather than true at the level of verbatim details) that likely determines whether misinformation is accepted and shared. Thus, we present a theoretical framework based on fuzzy-trace theory that integrates the following: knowledge that distinguishes verbatim facts from gist (knowledge that is amplified by cognitive faculties and derived from trusted sources); personality as an information-processing filter colored by experiences; emotion as a product of interpreting the gist of information; and ideology that changes prior probabilities and gist interpretations of what is plausible. The young and the old may be at greatest risk because of their prioritization of social goals, a need that social media algorithms are designed to meet but at the cost of widespread exposure to misinformation. 
  5. Misinformation is widespread, but only some people accept the false information they encounter. This raises two questions: Who falls for misinformation, and why do they fall for misinformation? To address these questions, two studies investigated associations between 15 individual-difference dimensions and judgments of misinformation as true. Using Signal Detection Theory, the studies further investigated whether the obtained associations are driven by individual differences in truth sensitivity, acceptance threshold, or myside bias. For both political misinformation (Study 1) and misinformation about COVID-19 vaccines (Study 2), truth sensitivity was positively associated with cognitive reflection and actively open-minded thinking, and negatively associated with bullshit receptivity and conspiracy mentality. Although acceptance threshold and myside bias explained considerable variance in judgments of misinformation as true, neither showed robust associations with the measured individual-difference dimensions. The findings provide deeper insights into individual differences in misinformation susceptibility and uncover critical gaps in their scientific understanding. 
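The myside-bias construct used above — and the "lower thresholds for identity-congenial information" finding in the main abstract — can be operationalized in SDT terms as a gap in the response criterion between belief-uncongenial and belief-congenial items. The sketch below illustrates that idea; the helper names and the add-0.5 rate correction are illustrative assumptions, not the studies' actual code.

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse standard-normal CDF

def _criterion(hits, misses, false_alarms, correct_rejections):
    """Response bias c; higher c means a stricter threshold for judging items true."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return -0.5 * (_z(hit_rate) + _z(fa_rate))

def myside_bias(congenial, uncongenial):
    """Myside bias as the criterion gap between identity-uncongenial and
    identity-congenial items. Each argument is a tuple of counts:
    (hits, misses, false_alarms, correct_rejections).

    Positive values indicate a laxer acceptance threshold for congenial
    information, i.e., the partisan-bias pattern.
    """
    return _criterion(*uncongenial) - _criterion(*congenial)
```

Framed this way, partisan bias is not a failure of discrimination at all: a respondent can have identical d′ for congenial and uncongenial items yet still accept far more congenial misinformation, simply by moving the criterion.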