

This content will become publicly available on May 31, 2025

Title: Does It Matter Who Said It? Exploring the Impact of Deepfake-Enabled Profiles on User Perception towards Disinformation
Recently, deepfake techniques have been adopted by real-world adversaries to fabricate believable personas (posing as experts or insiders) in disinformation campaigns to promote false narratives and deceive the public. In this paper, we investigate how fake personas influence user perception of the disinformation shared by such accounts. Using Twitter as an exemplary platform, we conduct a user study (N=417) in which participants read fake-news tweets with (and without) the presence of the tweet authors' profiles. Our study examines and compares three types of fake profiles: deepfake profiles, profiles of relevant organizations, and simple bot profiles. Our results highlight the significant impact of deepfake and organization profiles on increasing the perceived accuracy of and engagement with fake news. Moreover, deepfake profiles are rated as significantly more real than other profile types. Finally, we observe that users may like/reply/share a tweet even though they believe it to be inaccurate (e.g., for fun or truth-seeking), which could further disseminate false information. We then discuss the implications of our findings and directions for future research.
Award ID(s):
2030521 2055233
PAR ID:
10530421
Publisher / Repository:
AAAI
Date Published:
Journal Name:
Proceedings of the International AAAI Conference on Web and Social Media
Volume:
18
ISSN:
2162-3449
Page Range / eLocation ID:
1328 to 1341
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. As the scourge of “fake news” continues to plague our information environment, attention has turned toward devising automated solutions for detecting problematic online content. But, in order to build reliable algorithms for flagging “fake news,” we will need to go beyond broad definitions of the concept and identify distinguishing features that are specific enough for machine learning. With this objective in mind, we conducted an explication of “fake news,” a concept that has ballooned to include more than simply false information, with partisans weaponizing it to cast aspersions on the veracity of claims made by their political opponents. We identify seven different types of online content under the label of “fake news” (false news, polarized content, satire, misreporting, commentary, persuasive information, and citizen journalism) and contrast them with “real news” by introducing a taxonomy of operational indicators in four domains (message, source, structure, and network) that together can help disambiguate the nature of online news content.
  2. Disinformation activities that aim to manipulate public opinion pose serious challenges to managing online platforms. One of the most widely used disinformation techniques is bot-assisted fake social engagement, which is used to falsely and quickly amplify the salience of information at scale. Based on agenda-setting theory, we hypothesize that bot-assisted fake social engagement boosts public attention in the manner intended by the manipulator. Leveraging a proven case of bot-assisted fake social engagement operation in a highly trafficked news portal, this study examines the impact of fake social engagement on the digital public’s news consumption, search activities, and political sentiment. For that purpose, we used ground-truth labels of the manipulator’s bot accounts, as well as real-time clickstream logs generated by ordinary public users. Results show that bot-assisted fake social engagement operations disproportionately increase the digital public’s attention to not only the topical domain of the manipulator’s interest (i.e., political news) but also to specific attributes of the topic (i.e., political keywords and sentiment) that align with the manipulator’s intention. We discuss managerial and policy implications for increasingly cluttered online platforms. 
  3. The evolving landscape of manipulated media, including the threat of deepfakes, has made information verification a daunting challenge for journalists. Technologists have developed tools to detect deepfakes, but these tools can sometimes yield inaccurate results, raising concerns about inadvertently disseminating manipulated content as authentic news. This study examines the impact of unreliable deepfake detection tools on information verification. We conducted role-playing exercises with 24 US journalists, immersing them in complex breaking-news scenarios where determining authenticity was challenging. Through these exercises, we explored questions regarding journalists’ investigative processes, use of a deepfake detection tool, and decisions on when and what to publish. Our findings reveal that journalists are diligent in verifying information, but sometimes rely too heavily on results from deepfake detection tools. We argue for more cautious release of such tools, accompanied by proper training for users to mitigate the risk of unintentionally propagating manipulated content as real news. 
  4.
    In times of uncertainty, people often seek out information to help alleviate fear, possibly leaving them vulnerable to false information. During the COVID-19 pandemic, we witnessed a viral spread of incorrect and misleading information that compromised collective actions and public health measures to contain the spread of the disease. We investigated the influence of fear of COVID-19 on social and cognitive factors, including believing in fake news, bullshit receptivity, overclaiming, and problem-solving, within two of the populations that have been severely hit by COVID-19: Italy and the United States of America. To gain a better understanding of the role of misinformation during the early height of the COVID-19 pandemic, we also investigated whether problem-solving ability and socio-cognitive polarization were associated with believing in fake news. Results showed that fear of COVID-19 is related to seeking out information about the virus and avoiding infection in the Italian and American samples, as well as a willingness to share real news (COVID and non-COVID-related) headlines in the American sample. However, fear positively correlated with bullshit receptivity, suggesting that the pandemic might have contributed to creating a situation where people were pushed toward pseudo-profound existential beliefs. Furthermore, problem-solving ability was associated with correctly discerning real or fake news, whereas socio-cognitive polarization was the strongest predictor of believing in fake news in both samples. From these results, we concluded that a construct reflecting cognitive rigidity, neglecting alternative information, and black-and-white thinking negatively predicts the ability to discern fake from real news. Such a construct also extends to reasoning processes based on thinking outside the box and considering alternative information, such as problem-solving.
  5. Researchers across many disciplines seek to understand how misinformation spreads with a view toward limiting its impact. One important question in this research is how people determine whether a given piece of news is real or fake. In the current article, we discuss the value of signal detection theory (SDT) in disentangling two distinct aspects in the identification of fake news: (a) ability to accurately distinguish between real news and fake news and (b) response biases to judge news as real or fake regardless of news veracity. The value of SDT for understanding the determinants of fake-news beliefs is illustrated with reanalyses of existing data sets, providing more nuanced insights into how partisan bias, cognitive reflection, and prior exposure influence the identification of fake news. Implications of SDT for the use of source-related information in the identification of fake news, interventions to improve people’s skills in detecting fake news, and the debunking of misinformation are discussed. 
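    The two quantities that the SDT framing separates, discrimination ability (sensitivity, d′) and response bias (criterion, c), can both be computed from hit and false-alarm rates. The sketch below is illustrative only: the function name and the response counts are made up for a hypothetical headline-rating task and are not taken from the article's reanalyses.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute SDT sensitivity (d') and response bias (c).

    In a fake-news rating task, a "hit" is judging a real headline
    as real and a "false alarm" is judging a fake headline as real,
    so d' captures the ability to distinguish real from fake news,
    while c captures an overall bias toward answering "real"
    (negative c) or "fake" (positive c), regardless of veracity.
    """
    z = NormalDist().inv_cdf
    # Log-linear correction keeps rates strictly between 0 and 1,
    # avoiding infinite z-scores when a rate is exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts: 40 real and 40 fake headlines per participant.
d, c = sdt_measures(hits=32, misses=8, false_alarms=16, correct_rejections=24)
```

    In this made-up example the participant accepts many fake headlines as real, so d′ is modest and c is negative (a bias toward "real"), illustrating how SDT separates poor discrimination from a lenient response criterion.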