Title: Turning Fake Data into Fake News: The A.I. Training Set as a Trojan Horse of Misinformation
Award ID(s):
2121572
PAR ID:
10559394
Author(s) / Creator(s):
Publisher / Repository:
University of San Diego School of Law
Date Published:
Journal Name:
San Diego Law Review
ISSN:
0036-4037
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Today's disinformation campaigns may use deceptively altered photographs to promote a false narrative. In some cases, viewers may be unaware of the alteration and thus may more readily accept the promoted narrative. In this work, we consider whether this effect can be lessened by explaining to the viewer how an image has been manipulated. To explore this idea, we conducted a two-part study. We started with a survey (n=113) to examine whether users are indeed bad at identifying manipulated images. The results validated this conjecture: participants performed barely better than random guessing (60% accuracy). We then explored our main hypothesis in a second survey (n=543). We selected manipulated images circulated on the Internet that pictured political figures and opinion influencers. Participants were divided into three groups to view the original (unaltered) images, the manipulated images, or the manipulated images with explanations, respectively. Each image represents a single case study and is evaluated independently of the others. We find that simply highlighting and explaining the manipulation to users was not always effective. When it was effective, it did make users less likely to agree with the intended messages behind the manipulation. Surprisingly, however, the explanation also had a negative effect on users' sentiment toward the subjects in the images. Based on these results, we discuss open-ended questions that could serve as the basis for future research in this area.
  2. It is common in online markets for agents to learn from others' actions. Such observational learning can lead to herding or information cascades in which agents eventually "follow the crowd". Models for such cascades have been well studied for Bayes-rational agents that choose payoff-optimal actions. In this paper, we additionally consider the presence of fake agents that seek to influence other agents into taking one particular action. To that end, these agents take a fixed action in order to steer subsequent agents towards their preferred action. We characterize how the fraction of such fake agents impacts the behavior of the remaining agents and show that in certain scenarios, an increase in the fraction of fake agents in fact reduces the chances of their preferred outcome.
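The cascade dynamic the abstract describes can be sketched as a small simulation. This is an illustrative sketch rather than the paper's model: genuine agents here use a naive action-counting heuristic instead of a full Bayes-rational update, and the parameter names (`frac_fake`, `signal_acc`) are invented for the example.

```python
import random

def simulate(n_agents=200, frac_fake=0.1, signal_acc=0.7, theta=1, seed=0):
    """Simulate sequential observational learning with fake agents mixed in.

    Fake agents always play action 1 (their preferred outcome). Genuine
    agents receive a noisy private signal about the true state `theta`
    and combine it with a naive net count of previously observed actions.
    """
    rng = random.Random(seed)
    actions = []
    for _ in range(n_agents):
        if rng.random() < frac_fake:
            actions.append(1)          # fake agent pushes its preferred action
            continue
        # private signal: correct with probability signal_acc
        signal = theta if rng.random() < signal_acc else 1 - theta
        # naive evidence: net count of observed 1-actions plus own signal
        net = sum(1 if a == 1 else -1 for a in actions)
        net += 1 if signal == 1 else -1
        if net > 0:
            actions.append(1)
        elif net < 0:
            actions.append(0)
        else:
            actions.append(signal)     # tie: follow own signal
    return actions
```

Running `simulate()` over a grid of `frac_fake` values makes it easy to see when a larger share of fake agents stops helping, or even hurts, their preferred outcome.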
  3. Recent progress in natural language generation has raised dual-use concerns. While applications like summarization and translation are positive, the underlying technology might also enable adversaries to generate neural fake news: targeted propaganda that closely mimics the style of real news. Modern computer security relies on careful threat modeling: identifying potential threats and vulnerabilities from an adversary's point of view, and exploring potential mitigations to these threats. Likewise, developing robust defenses against neural fake news requires us first to carefully investigate and characterize the risks of these models. We thus present a model for controllable text generation called Grover. Given a headline like 'Link Found Between Vaccines and Autism,' Grover can generate the rest of the article; humans find these generations to be more trustworthy than human-written disinformation. Developing robust verification techniques against generators like Grover is critical. We find that the best current discriminators can distinguish neural fake news from real, human-written news with 73% accuracy, assuming access to a moderate level of training data. Counterintuitively, the best defense against Grover turns out to be Grover itself, with 92% accuracy, demonstrating the importance of public release of strong generators. We investigate these results further, showing that exposure bias -- and the sampling strategies that alleviate its effects -- both leave artifacts that similar discriminators can pick up on. We conclude by discussing ethical issues regarding the technology, and plan to release Grover publicly, helping pave the way for better detection of neural fake news.
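The closing observation above, that exposure bias and sampling strategies leave statistical artifacts a discriminator can pick up on, can be caricatured as a likelihood-based check: machine-generated text tends to look "too probable" under a language model. The sketch below is a hypothetical stand-in, not Grover's method: it uses a toy unigram model and an invented threshold where a real detector would use a large generator's own learned probabilities.

```python
import math
from collections import Counter

def unigram_model(corpus_tokens):
    """Toy stand-in for a real language model: unigram probabilities
    with add-one smoothing over the training corpus."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 slot for unseen tokens
    return lambda tok: (counts.get(tok, 0) + 1) / (total + vocab)

def avg_log_prob(text, prob):
    """Mean per-token log-probability of `text` under the model `prob`."""
    toks = text.lower().split()
    return sum(math.log(prob(t)) for t in toks) / len(toks)

def looks_generated(text, prob, threshold=-3.0):
    """Flag text whose tokens are 'too probable' on average -- a crude
    proxy for the sampling artifacts the abstract mentions. The
    threshold here is purely illustrative."""
    return avg_log_prob(text, prob) > threshold
```

In practice, the abstract's headline result is that the generator itself makes the strongest discriminator, which this surface-statistics sketch deliberately does not capture.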
  4. Mobile games have become highly popular and profitable. While much work has been done to understand deceptive patterns in games and some of the unethical practices they apply, little is known about fake games, an emergent phenomenon in mobile gaming. To address this gap, we conducted two studies: a walkthrough method to characterize fake games, and a thematic analysis of user reviews to gain understanding from the user perspective. We found five types of misalignments that render a game fake and identified four primary facets of player experience with fake games. These misalignments act as realization points at which users come to define a game as fake. We discuss the fakeness of fake games, how the formation of an ecosystem helps circulate fakeness, and the challenges of governing fake games. Lastly, we propose implications for research and design on how to mitigate and identify fake games.