Search for: All records

Creators/Authors contains: "Ruffin, Margie"

Note: Clicking a Digital Object Identifier (DOI) number will take you to an external site maintained by the publisher. Some full-text articles may not be available free of charge until the publisher's embargo (administrative interval) has ended.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available June 3, 2025
  2. Recently, deepfake techniques have been adopted by real-world adversaries to fabricate believable personas (posing as experts or insiders) in disinformation campaigns that promote false narratives and deceive the public. In this paper, we investigate how fake personas influence users' perception of the disinformation shared by such accounts. Using Twitter as an exemplary platform, we conduct a user study (N=417) in which participants read fake-news tweets with (and without) the presence of the tweet authors' profiles. Our study examines and compares three types of fake profiles: deepfake profiles, profiles of relevant organizations, and simple bot profiles. Our results highlight the significant impact of deepfake and organization profiles in increasing the perceived accuracy of, and engagement with, fake news. Moreover, deepfake profiles are rated as significantly more real than the other profile types. Finally, we observe that users may like/reply/share a tweet even though they believed it was inaccurate (e.g., for fun or truth-seeking), which could further disseminate false information. We then discuss the implications of our findings and directions for future research.
    Free, publicly-accessible full text available May 31, 2025
  3. Today's disinformation campaigns may use deceptively altered photographs to promote a false narrative. In some cases, viewers may be unaware of the alteration and thus more readily accept the promoted narrative. In this work, we consider whether this effect can be lessened by explaining to the viewer how an image has been manipulated. To explore this idea, we conducted a two-part study. We started with a survey (n=113) to examine whether users are indeed poor at identifying manipulated images. Our results validated this conjecture: participants performed barely better than random guessing (60% accuracy). We then explored our main hypothesis in a second survey (n=543). We selected manipulated images circulated on the Internet that pictured political figures and opinion influencers. Participants were divided into three groups to view the original (unaltered) images, the manipulated images, and the manipulated images with explanations, respectively. Each image represents a single case study and was evaluated independently of the others. We find that simply highlighting and explaining the manipulation to users was not always effective. When it was effective, it made users less likely to agree with the intended messages behind the manipulation. Surprisingly, however, the explanation also had a negative effect on users' sentiment toward the subjects in the images. Based on these results, we discuss open-ended questions that could serve as the basis for future research in this area.
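As a rough sanity check on the "barely better than random guessing" claim in the first survey above, one can ask whether 60% accuracy over n=113 respondents is distinguishable from the 50% chance level. The sketch below is purely illustrative and not from the paper: it treats the aggregate as n independent binary judgments (an assumption; the paper reports only the headline accuracy) and computes a one-sided exact binomial p-value with the standard library.

```python
from math import comb

# Illustrative assumption: n independent binary correct/incorrect judgments,
# one per respondent, with the 60% accuracy reported in the abstract.
n = 113              # respondents in the first survey (from the abstract)
k = round(0.60 * n)  # approximate number of correct judgments (~68)

# One-sided exact binomial p-value: P(X >= k) for X ~ Binomial(n, 0.5),
# i.e., the chance of doing at least this well by pure guessing.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"k = {k}, one-sided p = {p_value:.4f}")
```

Under these assumptions the result is statistically above chance yet close to it (p on the order of 0.02), consistent with the abstract's framing that participants were only marginally better than random.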