Title: Does It Matter Who Said It? Exploring the Impact of Deepfake-Enabled Profiles on User Perception towards Disinformation
Recently, deepfake techniques have been adopted by real-world adversaries to fabricate believable personas (posing as experts or insiders) in disinformation campaigns to promote false narratives and deceive the public. In this paper, we investigate how fake personas influence the user perception of the disinformation shared by such accounts. Using Twitter as an exemplary platform, we conduct a user study (N=417) where participants read tweets of fake news with (and without) the presence of the tweet authors' profiles. Our study examines and compares three types of fake profiles: deepfake profiles, profiles of relevant organizations, and simple bot profiles. Our results highlight the significant impact of deepfake and organization profiles on increasing the perceived information accuracy of and engagement with fake news. Moreover, deepfake profiles are rated as significantly more real than other profile types. Finally, we observe that users may like/reply/share a tweet even though they believe it was inaccurate (e.g., for fun or truth-seeking), which could further disseminate false information. We then discuss the implications of our findings and directions for future research.
Award ID(s):
2030521 2055233
PAR ID:
10530421
Publisher / Repository:
AAAI
Date Published:
Journal Name:
Proceedings of the International AAAI Conference on Web and Social Media
Volume:
18
ISSN:
2162-3449
Page Range / eLocation ID:
1328 to 1341
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1.
    As the scourge of “fake news” continues to plague our information environment, attention has turned toward devising automated solutions for detecting problematic online content. But, in order to build reliable algorithms for flagging “fake news,” we will need to go beyond broad definitions of the concept and identify distinguishing features that are specific enough for machine learning. With this objective in mind, we conducted an explication of “fake news” that, as a concept, has ballooned to include more than simply false information, with partisans weaponizing it to cast aspersions on the veracity of claims made by those who are politically opposed to them. We identify seven different types of online content under the label of “fake news” (false news, polarized content, satire, misreporting, commentary, persuasive information, and citizen journalism) and contrast them with “real news” by introducing a taxonomy of operational indicators in four domains—message, source, structure, and network—that together can help disambiguate the nature of online news content. 
  2. Disinformation activities that aim to manipulate public opinion pose serious challenges to managing online platforms. One of the most widely used disinformation techniques is bot-assisted fake social engagement, which is used to falsely and quickly amplify the salience of information at scale. Based on agenda-setting theory, we hypothesize that bot-assisted fake social engagement boosts public attention in the manner intended by the manipulator. Leveraging a proven case of bot-assisted fake social engagement operation in a highly trafficked news portal, this study examines the impact of fake social engagement on the digital public’s news consumption, search activities, and political sentiment. For that purpose, we used ground-truth labels of the manipulator’s bot accounts, as well as real-time clickstream logs generated by ordinary public users. Results show that bot-assisted fake social engagement operations disproportionately increase the digital public’s attention to not only the topical domain of the manipulator’s interest (i.e., political news) but also to specific attributes of the topic (i.e., political keywords and sentiment) that align with the manipulator’s intention. We discuss managerial and policy implications for increasingly cluttered online platforms. 
  3. Abstract The efficacy of fake news corrections in improving memory and belief accuracy may depend on how often adults see false information before it is corrected. Two experiments tested the competing predictions that repeating fake news before corrections will either impair or improve memory and belief accuracy. These experiments also examined whether fake news exposure effects would differ for younger and older adults due to age-related differences in the recollection of contextual details. Younger and older adults read real and fake news headlines that appeared once or thrice. Next, they identified fake news corrections among real news headlines. Later, recognition and cued recall tests assessed memory for real news, fake news, if corrections occurred, and beliefs in retrieved details. Repeating fake news increased detection and remembering of corrections, correct real news retrieval, and erroneous fake news retrieval. No age differences emerged for detection of corrections, but younger adults remembered corrections better than older adults. At test, correct fake news retrieval for earlier-detected corrections was associated with better real news retrieval. This benefit did not differ between age groups in recognition but was greater for younger than older adults in cued recall. When detected corrections were not remembered at test, repeated fake news increased memory errors. Overall, both age groups believed correctly retrieved real news more than erroneously retrieved fake news to a similar degree. These findings suggest that fake news repetition effects on subsequent memory accuracy depended on age differences in recollection-based retrieval of fake news and that it was corrected. 
  4.
    In times of uncertainty, people often seek out information to help alleviate fear, possibly leaving them vulnerable to false information. During the COVID-19 pandemic, we witnessed a viral spread of incorrect and misleading information that compromised collective actions and public health measures to contain the spread of the disease. We investigated the influence of fear of COVID-19 on social and cognitive factors, including believing in fake news, bullshit receptivity, overclaiming, and problem-solving, within two of the populations that were severely hit by COVID-19: Italy and the United States of America. To gain a better understanding of the role of misinformation during the early height of the COVID-19 pandemic, we also investigated whether problem-solving ability and socio-cognitive polarization were associated with believing in fake news. Results showed that fear of COVID-19 is related to seeking out information about the virus and avoiding infection in the Italian and American samples, as well as a willingness to share real news (COVID and non-COVID-related) headlines in the American sample. However, fear positively correlated with bullshit receptivity, suggesting that the pandemic might have contributed to creating a situation where people were pushed toward pseudo-profound existential beliefs. Furthermore, problem-solving ability was associated with correctly discerning real or fake news, whereas socio-cognitive polarization was the strongest predictor of believing in fake news in both samples. From these results, we concluded that a construct reflecting cognitive rigidity, neglect of alternative information, and black-and-white thinking negatively predicts the ability to discern fake from real news. This construct also extends to reasoning processes, such as problem-solving, that depend on thinking outside the box and considering alternative information.
  5. The threat that online fake news and misinformation pose to democracy, justice, public confidence, and especially to vulnerable populations has led to a sharp increase in the need for fake news detection and intervention. Whether multi-modal or pure text-based, most existing fake news detection methods depend on textual analysis of entire articles. However, these fake news detection methods come with certain limitations. For instance, fake news detection methods that rely on full text can be computationally inefficient, demand large amounts of training data to achieve competitive accuracy, and may lack robustness across different datasets. This is because fake news datasets have strong variations in terms of the level and types of information they provide; where some can include large paragraphs of text with images and metadata, and others can be a few short sentences. Perhaps if one could only use minimal information to detect fake news, fake news detection methods could become more robust and resilient to the lack of information. We aim to overcome these limitations by detecting fake news using systematically selected, limited information that is both effective and capable of delivering robust, promising performance. We propose a framework called SLIM (Systematically-selected Limited Information) for fake news detection. In SLIM, we quantify the amount of information by introducing information-theoretic measures. SLIM leverages limited information (e.g., a few named entities) to achieve performance in fake news detection comparable to that of state-of-the-art obtained using the full text, even when the dataset is sparse. Furthermore, by combining various types of limited information, SLIM can perform even better while significantly reducing the quantity of information required for training compared to state-of-the-art language model-based fake news detection techniques. 
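The SLIM abstract above describes quantifying "limited information" with information-theoretic measures and classifying from a few salient tokens (e.g., named entities). The following is a minimal, hypothetical sketch of that idea, not the authors' SLIM implementation: Shannon entropy measures how much information a token distribution carries, and a crude capitalization heuristic stands in for proper named-entity extraction. All function names and the selection heuristic here are illustrative assumptions.

```python
import math
from collections import Counter

def token_entropy(tokens):
    """Shannon entropy (in bits) of the empirical token distribution.
    Higher entropy = more distinct, evenly spread tokens."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_limited_info(text, k=5):
    """Hypothetical stand-in for 'a few named entities': keep the k most
    frequent capitalized tokens, and report the entropy of the full text
    as a rough measure of how much information is being discarded."""
    tokens = text.split()
    # Naive entity heuristic: tokens starting with an uppercase letter.
    entities = [t.strip(".,") for t in tokens if t[:1].isupper()]
    selected = [w for w, _ in Counter(entities).most_common(k)]
    return selected, token_entropy(tokens)

entities, bits = select_limited_info(
    "NASA confirms NASA budget cuts. Congress reacts to NASA.", k=2
)
```

A real system would replace the capitalization heuristic with an NER model and feed the selected entities to a classifier; the point of the sketch is only that a tiny, systematically chosen token subset can be measured against the full text's information content.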