Title: Polite AI mitigates user susceptibility to AI hallucinations
As their capabilities have grown, AI-based chatbots have become popular tools for answering complex queries. However, these chatbots may hallucinate, that is, generate incorrect but highly plausible-sounding information, more frequently than previously thought. It is therefore crucial to examine strategies for mitigating human susceptibility to hallucinated output. In a between-subjects experiment, participants completed a difficult quiz with assistance from either a polite or a neutral-toned AI chatbot, which occasionally provided hallucinated (incorrect) information. Signal detection analysis revealed that participants interacting with the polite AI showed modestly higher sensitivity in detecting hallucinations and a more conservative response bias than those interacting with the neutral-toned AI. Although the observed effect sizes were modest, even small improvements in users’ ability to detect AI hallucinations can have significant consequences, particularly in high-stakes domains or when aggregated across millions of AI interactions.
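The sensitivity and response-bias measures reported above are standard signal detection theory quantities, typically d′ and the criterion c. The sketch below illustrates how such measures are commonly computed; it is not the paper's own code, and the function name, trial counts, and log-linear correction are illustrative assumptions, with "flagging a chatbot answer as hallucinated" treated as the signal response.

```python
# Minimal sketch of a standard signal-detection analysis (not the study's code).
# Assumes hypothetical counts: a "hit" is correctly flagging a hallucinated
# answer, a "false alarm" is flagging an accurate answer as hallucinated.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return sensitivity (d') and response bias (criterion c)."""
    # Log-linear correction keeps z-scores finite when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa              # higher = better hallucination detection
    criterion = -0.5 * (z_hit + z_fa)   # positive = more conservative responding
    return d_prime, criterion

# Hypothetical participant facing 30 hallucinated and 30 accurate answers:
d, c = sdt_measures(hits=21, misses=9, false_alarms=6, correct_rejections=24)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

Under this convention, a higher d′ for the polite-AI group corresponds to better discrimination of hallucinated from accurate answers, and a more positive c to the more conservative response bias the abstract describes.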
Award ID(s):
2421062
PAR ID:
10588544
Author(s) / Creator(s):
; ;
Publisher / Repository:
Taylor and Francis
Date Published:
Journal Name:
Ergonomics
ISSN:
0014-0139
Page Range / eLocation ID:
1 to 11
Subject(s) / Keyword(s):
AI hallucination; automation etiquette; chatbot
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Social chatbots are designed to build emotional bonds with users, and thus it is particularly important to design these technologies so as to elicit positive perceptions from users. In the current study, we investigate the impacts that transparent explanations of chatbots’ mechanisms have on users’ perceptions of the chatbots. A total of 914 participants were recruited from Amazon Mechanical Turk. They were randomly assigned to observe conversations between a hypothetical chatbot and a user in one of the two-by-two experimental conditions: whether the participants received an explanation about how the chatbot was trained and whether the chatbot was framed as an intelligent entity or a machine. A fifth group, who believed they were observing interactions between two humans, served as a control. Analyses of participants’ responses to the post-observation survey indicated that transparency positively affected perceptions of social chatbots by leading users to (1) find the chatbot less creepy, (2) feel greater affinity to the chatbot, and (3) perceive the chatbot as more socially intelligent, though these effects were small. Importantly, transparency appeared to have a larger effect in increasing the perceived social intelligence among participants with lower prior AI knowledge. These findings have implications for the design of future social chatbots and support the addition of transparency and explanation for chatbot users.
  2. Chakraborty, Pinaki (Ed.)
    Social chatbots are aimed at building emotional bonds with users, and thus it is particularly important to design these technologies so as to elicit positive perceptions from users. In the current study, we investigate the impacts that transparent explanations of chatbots’ mechanisms have on users’ perceptions of the chatbots. A total of 914 participants were recruited from Amazon Mechanical Turk. They were randomly assigned to observe conversations between a hypothetical chatbot and a user in one of the two-by-two experimental conditions: whether the participants received an explanation about how the chatbot was trained and whether the chatbot was framed as an intelligent entity or a machine. A fifth group, who believed they were observing interactions between two humans, served as a control. Analyses of participants’ responses to the postobservation survey indicated that transparency positively affected perceptions of social chatbots by leading users to (1) find the chatbot less creepy, (2) feel greater affinity to the chatbot, and (3) perceive the chatbot as more socially intelligent, though these effects were small. Moreover, transparency appeared to have a larger effect on increasing the perceived social intelligence among participants with lower prior AI knowledge. These findings have implications for the design of future social chatbots and support the addition of transparency and explanation for chatbot users. 
  3. Novice programming students frequently engage in help-seeking to find information and learn about programming concepts. Among the available resources, generative AI (GenAI) chatbots appear resourceful, widely accessible, and less intimidating than human tutors. Programming instructors are actively integrating these tools into classrooms. However, our understanding of how novice programming students trust GenAI chatbots, and of the factors influencing their usage, remains limited. To address this gap, we investigated the learning resource selection process of 20 novice programming students tasked with studying a programming topic. We split our participants into two groups: one using ChatGPT (n=10) and the other using a human tutor via Discord (n=10). We found that participants held strong positive perceptions of ChatGPT's speed and convenience but were wary of its inconsistent accuracy, making them reluctant to rely on it for learning entirely new topics. Accordingly, they generally preferred more trustworthy resources (e.g., instructors, tutors) for learning, reserving ChatGPT for low-stakes situations or more introductory and common topics. We conclude by offering guidance to instructors on integrating LLM-based chatbots into their curricula, emphasizing verification and situational use, and to developers on designing chatbots that better address novices' trust and reliability concerns.
  4. This paper investigates the implementation of AI-driven chatbots as a solution to streamline academic advising and improve the student experience. Reviewing preliminary results from the Nittany Advisor chatbot, we show how AI chatbots can boost advising efficiency and increase student satisfaction, and we examine how chatbots can provide information on course requirements, prerequisites, and academic policies while flagging more complex queries for human intervention. We conclude that AI chatbots hold considerable promise for transforming academic advising by addressing routine questions, streamlining access to crucial information, and fostering a more responsive and supportive educational environment.
  5. Generative AI, particularly Large Language Models (LLMs), has revolutionized human-computer interaction by enabling the generation of nuanced, human-like text. This presents new opportunities, especially for enhancing explainability in AI systems such as recommender systems, where explanations are crucial for fostering user trust and engagement. LLM-powered AI chatbots can be leveraged to provide personalized explanations for recommendations. Although users often find these chatbot explanations helpful, they may not fully comprehend the content. Our research focuses on assessing how well users comprehend these explanations and on identifying gaps in understanding. We also explore the key behavioral differences between users who effectively understand AI-generated explanations and those who do not. We designed a three-phase user study with 17 participants to explore these dynamics. The findings indicate that the clarity and usefulness of the explanations are contingent on the user asking relevant follow-up questions and having a motivation to learn. Comprehension also varies significantly with users’ educational backgrounds.