This content will become publicly available on November 28, 2025
Polite AI mitigates user susceptibility to AI hallucinations
With their increased capability, AI-based chatbots have become popular tools to help users answer complex queries. However, these chatbots may hallucinate, or generate incorrect but very plausible-sounding information, more frequently than previously thought. Thus, it is crucial to examine strategies to mitigate human susceptibility to hallucinated output. In a between-subjects experiment, participants completed a difficult quiz with assistance from either a polite or a neutral-toned AI chatbot, which occasionally provided hallucinated (incorrect) information. Signal detection analysis revealed that participants interacting with the polite AI showed modestly higher sensitivity in detecting hallucinations and a more conservative response bias compared to those interacting with the neutral-toned AI. While the observed effect sizes were modest, even small improvements in users’ ability to detect AI hallucinations can have significant consequences, particularly in high-stakes domains or when aggregated across millions of AI interactions.
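The signal detection analysis referenced above decomposes quiz responses into sensitivity (d′, how well a participant discriminates hallucinated from accurate answers) and response bias (criterion c, the overall tendency to flag answers as hallucinated). Below is a minimal Python sketch of these standard measures; the counts are illustrative only and are not the study’s data or analysis code.

```python
# A minimal sketch (not the study's analysis code) of the signal detection
# measures reported above: sensitivity (d') and response bias (criterion c),
# treating a hallucinated answer as the "signal" to be detected.
from scipy.stats import norm

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Return (d', c), using a log-linear correction for extreme rates."""
    # Log-linear correction (+0.5 / +1) avoids infinite z-scores at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa               # higher = better hallucination detection
    criterion = -0.5 * (z_hit + z_fa)    # positive = more conservative responding
    return d_prime, criterion

# Illustrative counts only: a participant who flagged 18 of 24 hallucinated
# answers and wrongly flagged 4 of 24 accurate ones.
print(dprime_and_criterion(hits=18, misses=6, false_alarms=4, correct_rejections=20))
```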
- Award ID(s): 2421062
- PAR ID: 10588544
- Publisher / Repository: Taylor and Francis
- Date Published:
- Journal Name: Ergonomics
- ISSN: 0014-0139
- Page Range / eLocation ID: 1 to 11
- Subject(s) / Keyword(s): AI hallucination; automation etiquette; chatbot
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Chakraborty, Pinaki (Ed.). Social chatbots are aimed at building emotional bonds with users, and thus it is particularly important to design these technologies so as to elicit positive perceptions from users. In the current study, we investigate the impacts that transparent explanations of chatbots’ mechanisms have on users’ perceptions of the chatbots. A total of 914 participants were recruited from Amazon Mechanical Turk. They were randomly assigned to observe conversations between a hypothetical chatbot and a user in one of the two-by-two experimental conditions: whether the participants received an explanation about how the chatbot was trained and whether the chatbot was framed as an intelligent entity or a machine. A fifth group, who believed they were observing interactions between two humans, served as a control. Analyses of participants’ responses to the post-observation survey indicated that transparency positively affected perceptions of social chatbots by leading users to (1) find the chatbot less creepy, (2) feel greater affinity to the chatbot, and (3) perceive the chatbot as more socially intelligent, though these effects were small. Moreover, transparency appeared to have a larger effect on increasing the perceived social intelligence among participants with lower prior AI knowledge. These findings have implications for the design of future social chatbots and support the addition of transparency and explanation for chatbot users.
- This paper investigates the implementation of AI-driven chatbots as a solution to streamline academic advising and improve the student experience. Through a review of preliminary results from the Nittany Advisor chatbot, we show how AI chatbots can boost advising efficiency and increase student satisfaction, and we examine how chatbots can provide information on course requirements, prerequisites, and academic policies while suggesting the need for human intervention for more complex queries. We conclude that AI chatbots hold considerable promise for transforming academic advising by addressing routine questions, streamlining access to crucial information, and fostering a more responsive and supportive educational environment.
- Generative AI, particularly Large Language Models (LLMs), has revolutionized human-computer interaction by enabling the generation of nuanced, human-like text. This presents new opportunities, especially in enhancing explainability for AI systems like recommender systems, a crucial factor for fostering user trust and engagement. LLM-powered AI chatbots can be leveraged to provide personalized explanations for recommendations. Although users often find these chatbot explanations helpful, they may not fully comprehend the content. Our research focuses on assessing how well users comprehend these explanations and identifying gaps in understanding. We also explore the key behavioral differences between users who effectively understand AI-generated explanations and those who do not. We designed a three-phase user study with 17 participants to explore these dynamics. The findings indicate that the clarity and usefulness of the explanations are contingent on the user asking relevant follow-up questions and having a motivation to learn. Comprehension also varies significantly based on users’ educational backgrounds.
- It is critical to understand how to use artificial intelligence (AI) to foster innovation as AI becomes more integrated into creative and problem-solving tasks. Using the sustainable washing machine as a primary example, this study designed and developed the AI design assistant AIDA as a web-based chatbot to facilitate design ideation, leveraging large language models. AIDA prompts design tasks and assesses user-generated ideas for validity, novelty, and feasibility using RoBERTa-based models (see the classifier sketch after this list). As the initial phase of an ongoing project, we conducted a human-subject experiment to validate a baseline version of AIDA and examined user performance and perceptions. The participants demonstrated smooth interaction with AIDA and consistent performance. They reported mostly positive perceived usefulness, enjoyment, and trust. Moreover, female participants and those aged 25 or over showed a comparable level of trust for general automated systems and AIDA, whereas male and under-25 participants were more skeptical about AIDA. This research offers a framework for technical development, tailored interactions, and real-time feedback, as well as insights into the use of AI chatbots to mediate engineering design. By analyzing user behavior and survey responses, we identified future directions in designing AI systems in engineering education and early-stage design.
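On the RoBERTa-based idea assessment mentioned in the last item: the AIDA checkpoints are not public, so the following is only a hypothetical Python sketch of scoring a design idea with a fine-tuned RoBERTa classifier through the Hugging Face transformers pipeline. The model path, idea text, and labels are placeholders, not AIDA’s actual interface.

```python
# Hypothetical sketch of a RoBERTa-based idea scorer, in the spirit of AIDA's
# validity/novelty/feasibility assessment. "example-org/novelty-roberta" is a
# placeholder for a fine-tuned checkpoint; plain roberta-base is NOT trained
# for this task, and AIDA's real models and label set may differ.
from transformers import pipeline

novelty_scorer = pipeline("text-classification", model="example-org/novelty-roberta")

idea = ("Reuse rinse-cycle greywater to pre-soak the next load, "
        "reducing fresh water consumed per wash.")
print(novelty_scorer(idea))  # e.g. [{'label': 'novel', 'score': 0.87}]
```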