Title: Understanding the Effects of AI-based Credibility Indicators When People Are Influenced By Both Peers and Experts
In an era of rampant online misinformation, artificial intelligence (AI) technologies have emerged as tools to combat the problem. This paper examines the effects of AI-based credibility indicators on people’s online information processing under social influence from both peers and “experts.” Via three pre-registered, randomized experiments, we confirm that accurate AI-based credibility indicators enhance people’s ability to judge information veracity and reduce their propensity to share false information, even under influence from both laypeople peers and experts. Notably, these effects hold regardless of whether the experts’ expertise is verified, with particularly significant impacts when the AI’s predictions disagree with the experts’. However, the competence of the AI moderates these effects, as incorrect predictions can mislead people. Furthermore, exploratory analyses suggest that under our experimental settings, the AI-based credibility indicator has a larger impact than the expert’s opinion. Additionally, AI’s influence on people is partially mediated through peer influence, although people discount their laypeople peers’ opinions when they see the AI and the peers agree. We conclude by discussing the implications of utilizing AI to combat misinformation.
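The mediation finding above (AI’s influence on people operating partly through peer influence) is the kind of result typically estimated with a regression-based mediation analysis. Below is a minimal illustrative sketch in Python, assuming hypothetical column names (ai_indicator, peer_opinion, belief); it is not the authors’ analysis code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def indirect_effect(df: pd.DataFrame) -> float:
        # a: effect of the AI indicator on perceived peer opinion
        a = smf.ols("peer_opinion ~ ai_indicator", data=df).fit().params["ai_indicator"]
        # b: effect of peer opinion on belief, controlling for the AI indicator
        b = smf.ols("belief ~ peer_opinion + ai_indicator", data=df).fit().params["peer_opinion"]
        return a * b  # the indirect (mediated) effect

    def bootstrap_ci(df: pd.DataFrame, n_boot: int = 2000, alpha: float = 0.05, seed: int = 0):
        # Percentile bootstrap confidence interval for the indirect effect.
        rng = np.random.default_rng(seed)
        estimates = [
            indirect_effect(df.sample(len(df), replace=True,
                                      random_state=int(rng.integers(10**9))))
            for _ in range(n_boot)
        ]
        return np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])

A bootstrap interval for a*b that excludes zero, alongside a direct effect that remains non-zero, would correspond to the partial mediation the abstract describes.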
Award ID(s):
2229876
PAR ID:
10663423
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
Page Range / eLocation ID:
1 to 19
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Misinformation on social media has become a serious concern. Marking news stories with credibility indicators, possibly generated by an AI model, is one way to help people combat misinformation. In this paper, we report the results of two randomized experiments designed to understand the effects of AI-based credibility indicators on people's perceptions of and engagement with the news when social influence is present, i.e., when people's judgments of the news are influenced by others. We find that the presence of AI-based credibility indicators nudges people into aligning their beliefs about news veracity with the AI model's predictions regardless of their correctness, thereby changing people's accuracy in detecting misinformation. However, AI-based credibility indicators have limited impact on people's engagement with either real or fake news when social influence exists. Finally, when the indicators are provided before people form their own judgments about the news, their effects on the detection and spread of misinformation are larger under social influence than without it. We conclude by providing implications for better utilizing AI to fight misinformation.
  2. Misinformation on social media poses significant societal challenges, particularly with the rise of large language models (LLMs) that can amplify its realism and reach. This study examines how adversarial social influence generated by LLM-powered bots affects people's online information processing. Via a pre-registered, randomized human-subject experiment, we examined the effects of two types of LLM-driven adversarial influence: bots posting comments contrary to the news veracity and bots replying adversarially to human comments. Results show that both forms of influence significantly reduce participants' ability to detect misinformation and discern true news from false. Additionally, adversarial comments were more effective than adversarial replies in discouraging the sharing of real news. The impact of these influences was moderated by political alignment, with participants more susceptible when the news conflicted with their political leanings. Guided by these findings, we conclude by discussing targeted interventions to combat misinformation spread by adversarial social influence.
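    To make the two manipulations concrete, the sketch below shows how such adversarial bot messages might be generated. The prompt wording and the call_llm stub are hypothetical placeholders for any chat-completion client; this is not the study's actual prompt or model.

        def call_llm(prompt: str) -> str:
            # Stub: plug in any chat-completion client here.
            raise NotImplementedError

        def adversarial_comment(headline: str, is_true: bool) -> str:
            # Bot type 1: a comment contrary to the news veracity.
            stance = "cast doubt on" if is_true else "vouch for"
            prompt = ("You are a casual social media user. Write a short comment "
                      f"that would {stance} this headline:\n{headline}")
            return call_llm(prompt)

        def adversarial_reply(human_comment: str) -> str:
            # Bot type 2: an adversarial reply to a human comment.
            prompt = ("Write a brief reply arguing the opposite of this "
                      f"comment:\n{human_comment}")
            return call_llm(prompt)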
  3. Concerns about the spread of misinformation online via news articles have led to the development of many tools and processes involving human annotation of their credibility. However, much is still unknown about how different people judge news credibility, or about the quality and reliability of credibility ratings from populations of varying expertise. In this work, we consider credibility ratings from two “crowd” populations: 1) students in journalism or media programs, and 2) crowd workers on UpWork, and compare them with the ratings of two sets of experts, journalists and climate scientists, on a set of 50 climate-science articles. We find that both crowd groups’ credibility ratings correlate more strongly with the journalism experts’ ratings than with the science experts’, with 10-15 raters needed for the aggregate ratings to converge. We also find that raters’ gender and political leaning affect their ratings. Across article genres (news/opinion/analysis) and source leanings (left/center/right), crowd ratings were most similar to the experts’ for opinion articles and for strongly left-leaning sources.
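    The "10-15 raters" convergence finding can be illustrated with a simple resampling check: average k randomly drawn crowd ratings per article, correlate the averages with the expert ratings, and watch where the curve flattens. The sketch below uses synthetic data; all shapes and numbers are assumptions for illustration, not the paper's data.

        import numpy as np

        def convergence_curve(crowd, expert_mean, ks=range(1, 21), n_rep=200, seed=0):
            # crowd: (n_articles, n_raters) ratings; expert_mean: (n_articles,)
            rng = np.random.default_rng(seed)
            n_raters = crowd.shape[1]
            curve = []
            for k in ks:
                rs = []
                for _ in range(n_rep):
                    idx = rng.choice(n_raters, size=k, replace=False)
                    avg = crowd[:, idx].mean(axis=1)  # k-rater average per article
                    rs.append(np.corrcoef(avg, expert_mean)[0, 1])
                curve.append((k, float(np.mean(rs))))
            return curve

        # Synthetic demo: 50 articles, 30 crowd raters with noisy ratings.
        rng = np.random.default_rng(1)
        truth = rng.normal(size=50)
        crowd = truth[:, None] + rng.normal(scale=1.0, size=(50, 30))
        for k, r in convergence_curve(crowd, truth):
            print(f"k={k:2d}  mean correlation={r:.3f}")  # flattens as k grows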
  4. Misinformation poses a threat to democracy and to people’s health. Reliability criteria for news websites can help people identify misinformation, but despite their importance, there has been no empirically substantiated list of criteria for distinguishing reliable from unreliable news websites. We identify reliability criteria, describe how they are applied in practice, and compare them to prior work. Based on our analysis, we distinguish between manipulable and less manipulable criteria and compare politically diverse laypeople as end users with journalists as expert users. We discuss 11 widely recognized criteria, including the following six that are difficult to manipulate: content, political alignment, authors, professional standards, what sources are used, and a website’s reputation. Finally, we describe how technology may be able to support people in applying these criteria in practice to assess the reliability of websites.
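    As one way technology might support applying such criteria, the toy sketch below encodes the six less-manipulable criteria from the abstract as a checklist and computes a simple reliability score. The boolean checks and equal weighting are hypothetical simplifications, not the paper's instrument.

        from dataclasses import dataclass, field

        # Criterion names taken from the abstract; everything else is illustrative.
        CRITERIA = [
            "content",
            "political_alignment",
            "authors",
            "professional_standards",
            "sources_used",
            "site_reputation",
        ]

        @dataclass
        class SiteAssessment:
            checks: dict = field(default_factory=dict)  # criterion -> passes?

            def reliability_score(self) -> float:
                # Fraction of the six criteria the website satisfies.
                return sum(self.checks.get(c, False) for c in CRITERIA) / len(CRITERIA)

        assessment = SiteAssessment(checks={"content": True, "authors": True,
                                            "sources_used": False})
        print(f"reliability score: {assessment.reliability_score():.2f}")  # 0.33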