Deepfakes have become a dual-use technology with applications in art, science, and industry. However, the technology can also be leveraged maliciously for disinformation, identity fraud, and harassment. In response to the technology's dangerous potential, many deepfake creation communities have been deplatformed, including the technology's originating community, r/deepfakes. MrDeepFakes (MDF) went online in February 2018, just eight days after the removal of r/deepfakes, as a privately owned platform intended to fill the role of community hub, and has since grown into the largest dedicated deepfake creation and discussion platform currently online. This role as community hub is balanced against the site's other main purpose: hosting deepfake pornography depicting public figures, produced without their consent. In this paper we explore the two largest deepfake communities that have existed, using a mixed-methods approach combining quantitative and qualitative analysis. We seek to identify how these platforms were and are used by their members, what opinions these deepfakers hold about the technology and how society at large views it, and how the community views deepfakes-as-disinformation. We find a strong emphasis on technical discussion on these platforms, intermixed with potentially malicious content. Additionally, we find that the deplatforming of deepfake communities early in the technology's life has significantly affected trust in alternative community platforms.
Cross-Platform Disinformation Campaigns: Lessons Learned and Next Steps
We conducted a mixed-method, interpretative analysis of an online, cross-platform disinformation campaign targeting the White Helmets, a rescue group operating in rebel-held areas of Syria that has become the subject of a persistent effort of delegitimization. This research helps to conceptualize what a disinformation campaign is and how it works. Based on what we learned from this case study, we conclude that a comprehensive understanding of disinformation requires accounting for the spread of content across platforms and that social media platforms should increase collaboration to detect and characterize disinformation campaigns.
- PAR ID: 10171226
- Journal Name: Harvard Kennedy School Misinformation Review
- Volume: 1
- Issue: 1
- Sponsoring Org: National Science Foundation
More Like this
The 2020 US election was accompanied by an effort to spread a false meta-narrative of widespread voter fraud. This meta-narrative took hold among a substantial portion of the US population, undermining trust in election procedures and results, and eventually motivating the events of 6 January 2021. We examine this effort as a domestic and participatory disinformation campaign in which a variety of influencers, including hyperpartisan media and political operatives, worked alongside ordinary people to produce and amplify misleading claims, often unwittingly. To better understand the nature of participatory disinformation, we examine three cases of misleading claims of voter fraud, applying an interpretive, mixed-method approach to the analysis of social media data. Contrary to a prevailing view of such campaigns as coordinated and/or elite-driven efforts, this work reveals a more hybrid form, demonstrating both top-down and bottom-up dynamics that are more akin to cultivation and improvisation.
Disinformation activities that aim to manipulate public opinion pose serious challenges to managing online platforms. One of the most widely used disinformation techniques is bot-assisted fake social engagement, which is used to falsely and quickly amplify the salience of information at scale. Based on agenda-setting theory, we hypothesize that bot-assisted fake social engagement boosts public attention in the manner intended by the manipulator. Leveraging a proven case of a bot-assisted fake social engagement operation in a highly trafficked news portal, this study examines the impact of fake social engagement on the digital public's news consumption, search activities, and political sentiment. For that purpose, we used ground-truth labels of the manipulator's bot accounts, as well as real-time clickstream logs generated by ordinary public users. Results show that bot-assisted fake social engagement operations disproportionately increase the digital public's attention to not only the topical domain of the manipulator's interest (i.e., political news) but also to specific attributes of the topic (i.e., political keywords and sentiment) that align with the manipulator's intention.  We discuss managerial and policy implications for increasingly cluttered online platforms.
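The abstract does not spell out how the attention boost was measured; as a rough, hedged illustration only, the core comparison (ordinary users' clicks on the targeted topic during bot-operation windows versus baseline periods) might be sketched as below. The file name, column names, and daily aggregation are assumptions made for illustration, not the study's actual data schema or analysis.

```python
# Illustrative sketch only: compares ordinary users' attention to a topic
# during bot-assisted fake-engagement windows against baseline days.
# The CSV layout and columns ("date", "topic", "clicks", "bots_active")
# are assumed for this example and are not the study's real schema.
import pandas as pd

logs = pd.read_csv("clickstream_daily.csv", parse_dates=["date"])

# Restrict to the topical domain the manipulator targeted (here: politics).
political = logs[logs["topic"] == "politics"]

# Mean daily clicks by ordinary users, split by whether bots were active.
by_condition = political.groupby("bots_active")["clicks"].mean()
lift = by_condition.get(True, float("nan")) / by_condition.get(False, float("nan"))
print(f"Attention during bot-operation days: {lift:.2f}x baseline")
```

A real analysis would of course need to control for confounds such as breaking-news shocks, for example with an interrupted time-series design, rather than a raw difference in means.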
What types of governance arrangements make some self-governed online groups more vulnerable to disinformation campaigns? We present a qualitative comparative analysis of the Croatian and Serbian Wikipedia editions to answer this question. We do so because between at least 2011 and 2020, the Croatian language version of Wikipedia was taken over by a small group of administrators who introduced far-right bias and outright disinformation. Dissenting editorial voices were reverted, banned, and blocked. Although Serbian, Bosnian, and Serbo-Croatian Wikipedias share many linguistic and cultural features, and faced similar threats, they seem to have largely avoided this fate. Based on a grounded theory analysis of interviews with members of these communities and others in cross-functional platform-level roles, we propose that the convergence of three features (high perceived value as a target, limited early bureaucratic openness, and a preference for personalistic, informal forms of organization over formal ones) produced a window of opportunity for governance capture on Croatian Wikipedia. Our findings illustrate that online community governing infrastructures can play a crucial role in systematic disinformation campaigns and other influence operations.
Social media has become an important method for information sharing. This has also created opportunities for bad actors to easily spread disinformation and manipulate public opinion. This paper explores the possibility of applying Authorship Verification on online communities to mitigate abuse by analyzing the writing style of online accounts to identify accounts managed by the same person. We expand on our similarity-based authorship verification approach, previously applied to large fanfictions, and show that it works in open-world settings and on shorter documents, and that it is largely topic-agnostic. Our expanded model can link Reddit accounts based on the writing style of only 40 comments with an AUC of 0.95, and the performance increases to 0.98 given more content. We apply this model to a set of suspicious Reddit accounts associated with the disinformation campaign surrounding the 2016 U.S. presidential election and show that the writing styles of these accounts are inconsistent, indicating that each account was likely maintained by multiple individuals. We also apply this model to Reddit user accounts that commented on the WallStreetBets subreddit around the 2021 GameStop short squeeze and show that a number of account pairs share very similar writing styles. We also show that this approach can link accounts across Reddit and Twitter with an AUC of 0.91 even when training data is very limited.
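The abstract does not give the feature set or model behind the similarity-based approach; the following is a minimal sketch of one common instantiation of stylometric authorship verification (character n-gram TF-IDF vectors compared by cosine similarity and scored with AUC). The toy accounts, pair labels, and parameter choices are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of similarity-based authorship verification.
# Assumption: character n-gram TF-IDF + cosine similarity is one common
# instantiation; the paper's actual features and model may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics import roc_auc_score

def account_vector(comments, vectorizer):
    """Represent an account by the TF-IDF vector of its pooled comments."""
    return vectorizer.transform([" ".join(comments)])

# Toy data: each account is a list of comments (in practice, ~40 or more).
accounts = {
    "acct_a1": ["the market is gonna rip tomorrow", "holding no matter what"],
    "acct_a2": ["market gonna rip, holding no matter what happens"],
    "acct_b1": ["Per my analysis, the fundamentals remain sound."],
}
# Pairs labeled 1 if written by the same person, else 0 (toy ground truth).
pairs = [("acct_a1", "acct_a2", 1), ("acct_a1", "acct_b1", 0)]

# Character n-grams capture style (punctuation, casing, spelling habits)
# rather than topic, consistent with a largely topic-agnostic method.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
vectorizer.fit([" ".join(c) for c in accounts.values()])

scores, labels = [], []
for a, b, y in pairs:
    sim = cosine_similarity(account_vector(accounts[a], vectorizer),
                            account_vector(accounts[b], vectorizer))[0, 0]
    scores.append(sim)
    labels.append(y)

print("AUC on toy pairs:", roc_auc_score(labels, scores))
```

Pairwise similarity scores like these can be thresholded to decide whether two accounts share an author, with AUC summarizing ranking quality across all thresholds.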