

Title: Social media algorithms can curb misinformation, but do they?
A recent article in Science by Guess et al. estimated the effect of Facebook's news feed algorithm on exposure to misinformation and political information among Facebook users. However, its reporting and conclusions did not account for a series of temporary emergency changes that Facebook made to its news feed algorithm in the wake of the 2020 U.S. presidential election, changes designed to diminish the spread of voter-fraud misinformation. Here, we demonstrate that these emergency measures systematically reduced the amount of misinformation in the study's control group, which was using the news feed algorithm. This issue may have led readers to misinterpret the study's results and to conclude that the Facebook news feed algorithm used outside the study period mitigates political misinformation compared with a reverse-chronological feed.
Award ID(s):
2432052
PAR ID:
10587827
Author(s) / Creator(s):
; ; ; ; ;
Publisher / Repository:
AAAS
Date Published:
Journal Name:
Science
ISSN:
1095-9203
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Multiple recent efforts have used large-scale data and computational models to automatically detect misinformation in online news articles. Given the potential impact of misinformation on democracy, many of these efforts have also used the political ideology of these articles to better model misinformation and to study political bias in such algorithms. However, almost all such efforts have used source-level labels for credibility and political alignment, thereby assigning the same credibility and political-alignment label to all articles from the same source (e.g., the New York Times or Breitbart). Here, we report on the impact of applying journalistic best practices to label individual news articles for their credibility and political alignment. We found that while source-level labels are decent proxies for political-alignment labeling, they are very poor proxies for credibility ratings, performing almost the same as flipping a coin. Next, we study the implications of such source-level labeling for downstream processes such as the development of automated misinformation detection algorithms and the political fairness audits performed on them. We find that automated misinformation detection and fairness algorithms can be revised to support their intended goals, but they might require different assumptions and methods than those appropriate under source-level labeling. The results suggest caution in generalizing recent results on misinformation detection and the political bias therein. On a positive note, this work shares a new dataset of individually labeled articles rated against journalistic quality standards, along with an approach for misinformation detection and fairness audits.
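To make the coin-flip comparison above concrete, the toy function below measures how often a source-level label agrees with an article-level label. The outlets and labels are invented for illustration and do not come from the study's data; an agreement rate near 0.5 is what "almost the same as flipping a coin" means here.

```python
# One credibility label per source, as in source-level labeling.
SOURCE_CREDIBILITY = {"outlet_a": "credible", "outlet_b": "not_credible"}

# (source, article-level credibility from individual journalistic review).
# Values are invented to show a worst-case 50% agreement.
articles = [
    ("outlet_a", "credible"),
    ("outlet_a", "not_credible"),
    ("outlet_b", "not_credible"),
    ("outlet_b", "credible"),
]

def source_label_agreement(articles, source_labels):
    """Fraction of articles whose source-level label matches the
    article-level label. A value near 0.5 means the source label is
    no better than a coin flip for that article."""
    hits = sum(source_labels[src] == lbl for src, lbl in articles)
    return hits / len(articles)
```

On this toy data the agreement rate is 0.5, the coin-flip baseline the abstract refers to.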
  2. People are increasingly exposed to science and political information from social media. One consequence is that these sites play host to “alternative influencers,” who spread misinformation. However, content posted by alternative influencers on different social media platforms is unlikely to be homogeneous. Our study uses computational methods to investigate how dimensions we refer to as the audience and channel of a social media platform influence the emotion and topics in content posted by alternative influencers on different platforms. Using COVID-19 as an example, we find that alternative influencers’ content contained more anger and fear words on Facebook and Twitter than on YouTube. We also found that these actors discussed substantively different topics in their COVID-19 content on YouTube compared to Twitter and Facebook. With these findings, we discuss how the audience and channel of different social media platforms affect alternative influencers’ ability to spread misinformation online.
  3. Newsfeed algorithms frequently amplify misinformation and other low-quality content. How can social media platforms more effectively promote reliable information? Existing approaches are difficult to scale and vulnerable to manipulation. In this paper, we propose using the political diversity of a website’s audience as a quality signal. Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 US residents, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards. We then incorporate audience diversity into a standard collaborative filtering framework and show that our improved algorithm increases the trustworthiness of websites suggested to users—especially those who most frequently consume misinformation—while keeping recommendations relevant. These findings suggest that partisan audience diversity is a valuable signal of higher journalistic standards that should be incorporated into algorithmic ranking decisions. 
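The audience-diversity signal described in the abstract above can be sketched in a few lines. The site names, visit shares, and blending weight below are invented for illustration; the paper's actual collaborative-filtering formulation is more involved than this re-ranking sketch.

```python
from math import log2

# Hypothetical data: for each site, the share of its visits coming from
# left-leaning, centrist, and right-leaning users. Numbers are illustrative.
audience_shares = {
    "broadsheet.example": [0.35, 0.30, 0.35],  # politically diverse audience
    "partisan.example":   [0.92, 0.05, 0.03],  # one-sided audience
}

def partisan_diversity(shares):
    """Normalized Shannon entropy of the audience's partisan mix:
    0 = entirely one-sided, 1 = perfectly balanced."""
    h = -sum(p * log2(p) for p in shares if p > 0)
    return h / log2(len(shares))

def rerank(scored_sites, shares, weight=0.5):
    """Blend a base relevance score with the diversity signal, so sites
    with politically diverse audiences are promoted in recommendations."""
    return sorted(
        scored_sites,
        key=lambda pair: (1 - weight) * pair[1]
        + weight * partisan_diversity(shares[pair[0]]),
        reverse=True,
    )
```

Given equal base relevance, the site with the more politically diverse audience is ranked first, which is the direction of the adjustment the abstract describes.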
  4. Misinformation poses a threat to democracy and to people’s health. Reliability criteria for news websites can help people identify misinformation. But despite their importance, there has been no empirically substantiated list of criteria for distinguishing reliable from unreliable news websites. We identify reliability criteria, describe how they are applied in practice, and compare them to prior work. Based on our analysis, we distinguish between manipulable and less manipulable criteria and compare politically diverse laypeople as end users with journalists as expert users. We discuss 11 widely recognized criteria, including the following 6 that are difficult to manipulate: content, political alignment, authors, professional standards, the sources used, and a website’s reputation. Finally, we describe how technology may be able to support people in applying these criteria to assess the reliability of websites in practice.
  5. Struggling to curb misinformation, social media platforms are experimenting with design interventions to enhance consumption of credible news on their platforms. Some of these interventions, such as warning messages, are examples of nudges, a choice-preserving technique for steering behavior. Despite their application, we do not know whether nudges can steer people into making conscious news-credibility judgments online and, if they do, under what constraints. To answer this, we combine nudge techniques with heuristic-based information processing to design NudgeCred, a browser extension for Twitter. NudgeCred directs users’ attention to two design cues, the authority of a source and other users’ collective opinion on a report, by activating three design nudges (Reliable, Questionable, and Unreliable), each denoting a particular level of credibility for news tweets. In a controlled experiment, we found that NudgeCred significantly helped users (n=430) distinguish the credibility of news tweets, unaffected by three behavioral confounds: political ideology, political cynicism, and media skepticism. A five-day field deployment with twelve participants revealed that NudgeCred improved their recognition of news items and their attention to all of our nudges, particularly Questionable. Among other considerations, participants proposed that designers should incorporate heuristics that users would trust. Our work informs nudge-based approaches to the design of online media systems.
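As a rough illustration of how the two cues named in the abstract might map to NudgeCred's three labels, the rule below combines a source-authority flag with a stand-in score for other users' collective opinion. The 0-1 approval score and the 0.5 threshold are assumptions made for this sketch, not details taken from the NudgeCred paper.

```python
def credibility_nudge(source_is_authoritative: bool, collective_approval: float) -> str:
    """Map two design cues to one of three nudge labels.

    `collective_approval` is an assumed 0-1 score standing in for other
    users' collective opinion of a report; the threshold is illustrative.
    """
    if source_is_authoritative and collective_approval >= 0.5:
        return "Reliable"
    if not source_is_authoritative and collective_approval < 0.5:
        return "Unreliable"
    # The two cues disagree, so the tweet gets the intermediate label.
    return "Questionable"
```

When the two cues disagree (an authoritative source with low approval, or vice versa), the sketch falls back to Questionable, the label the field deployment found drew the most attention.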