

This content will become publicly available on December 1, 2025

Title: Generative Chatbots AIn’t experts: Exploring cognitive and metacognitive limitations that hinder expertise in generative Chatbots.
Despite their ability to answer complex questions, it is unclear whether generative chatbots should be considered experts in any domain. There are several important cognitive and metacognitive differences that separate human experts from generative chatbots. First, human experts’ domain knowledge is deep, efficiently structured, adaptive, and intuitive – whereas generative chatbots’ knowledge is shallow and inflexible, leading to errors that human experts would rarely make. Second, generative chatbots lack access to critical metacognitive capacities that allow humans to detect errors in their own thinking and communicate this information to others. Though generative chatbots may surpass human experts in the future – for now, the nature of their knowledge structures and metacognition prevent them from reaching true expertise.
Award ID(s):
2333553
PAR ID:
10567236
Author(s) / Creator(s):
;
Publisher / Repository:
American Psychological Association
Date Published:
Journal Name:
Journal of Applied Research in Memory and Cognition
Volume:
13
Issue:
4
ISSN:
2211-3681
Page Range / eLocation ID:
490 to 494
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Social anxiety (SA) has become increasingly prevalent. Traditional coping strategies often face accessibility challenges. Generative AI (GenAI) chatbots, known for their knowledgeable and conversational capabilities, are emerging as alternative tools for mental well-being. With the increased integration of GenAI, it is important to examine individuals’ attitudes and trust in GenAI chatbots’ support for SA. Through a mixed-method approach that involved surveys (n = 159) and interviews (n = 17), we found that individuals with severe symptoms tended to trust and embrace GenAI chatbots more readily, valuing their non-judgmental support and perceived emotional comprehension. However, those with milder symptoms prioritized technical reliability. We identified factors influencing trust, such as GenAI chatbots’ ability to generate empathetic responses and their context-sensitive limitations, which were particularly important among individuals with SA. We also discuss the design implications and use of GenAI chatbots in fostering cognitive and emotional trust, with practical and design considerations.
  2. Chatbots are often designed to mimic social roles attributed to humans. However, little is known about the impact of using language that fails to conform to the associated social role. Our research draws on sociolinguistics to investigate how a chatbot’s language choices can adhere to the expected social role the agent performs within a context. We seek to understand whether chatbot design should account for linguistic register. This research analyzes how register differences play a role in shaping the user’s perception of the human-chatbot interaction. We produced parallel corpora of conversations in the tourism domain with similar content and varying register characteristics and evaluated users’ preferences regarding chatbots’ linguistic choices in terms of appropriateness, credibility, and user experience. Our results show that register characteristics are strong predictors of users’ preferences, which points to the need to design chatbots with register-appropriate language to improve acceptance and users’ perceptions of chatbot interactions.
  3.
    Dialogue systems, also called chatbots, are now used in a wide range of applications. However, they still have some major weaknesses. One key weakness is that they are typically trained from manually-labeled data and/or written with handcrafted rules, and their knowledge bases (KBs) are also compiled by human experts. Due to the huge amount of manual effort involved, they are difficult to scale and also tend to produce many errors owing to their limited ability to understand natural language and the limited knowledge in their KBs. Thus, user satisfaction is often low. In this paper, we propose to dramatically improve the situation by endowing chatbots with the ability to continually learn, by themselves during conversation, (1) new world knowledge, (2) new language expressions to ground them to actions, and (3) new conversational skills – so that the more they chat with users, the more knowledgeable they become, the better they understand diverse natural language expressions, and the more their conversational skills improve.
  4. While offering the potential to support learning interactions, emerging AI applications like Large Language Models (LLMs) come with ethical concerns. Grounding technology design in human values can address AI ethics and ensure adoption. To this end, we apply Value‐Sensitive Design—involving empirical, conceptual and technical investigations—to centre human values in the development and evaluation of LLM‐based chatbots within a high school environmental science curriculum. Representing multiple perspectives and expertise, the chatbots help students refine their causal models of climate change's impact on local marine ecosystems, communities and individuals. We first perform an empirical investigation leveraging participatory design to explore the values that motivate students and educators to engage with the chatbots. Then, we conceptualize the values that emerge from the empirical investigation by grounding them in research in ethical AI design, human values, human‐AI interactions and environmental education. Findings illuminate considerations for the chatbots to support students' identity development, well‐being, human–chatbot relationships and environmental sustainability. We further map the values onto design principles and illustrate how these principles can guide the development and evaluation of the chatbots. Our research demonstrates how to conduct contextual, value‐sensitive inquiries of emergent AI technologies in educational settings. 
Practitioner notes
What is already known about this topic: Generative artificial intelligence (GenAI) technologies like Large Language Models (LLMs) can not only support learning, but also raise ethical concerns such as transparency, trust and accountability. Value‐sensitive design (VSD) presents a systematic approach to centring human values in technology design.
What this paper adds: We apply VSD to design LLM‐based chatbots in environmental education and identify values central to supporting students' learning. We map the values emerging from the VSD investigations to several stages of GenAI technology development: conceptualization, development and evaluation.
Implications for practice and/or policy: Identity development, well‐being, human–AI relationships and environmental sustainability are key values for designing LLM‐based chatbots in environmental education. Using educational stakeholders' values to generate design principles and evaluation metrics for learning technologies can promote technology adoption and engagement.
  5. Aim/Purpose: The purpose of this paper is to explore the efficacy of simulated interactive virtual conversations (chatbots) for mentoring underrepresented minority doctoral engineering students who are considering pursuing a career in the professoriate or in industry. Background: Chatbots were developed under the National Science Foundation INCLUDES Design and Developments Launch Pilot award (17-4458) and provide career advice with responses from a pre-programmed database populated by renowned emeriti engineering faculty. Chatbots have been engineered to fulfill a myriad of roles, such as undergraduate student advisement, but no research has been found that addresses their use with supplemental future faculty mentoring for doctoral students. Methodology: Chatbot efficacy is examined through a phenomenological design with focus groups with underrepresented minority doctoral engineering students. No theoretical or conceptual frameworks exist relative to chatbots designed for future faculty mentoring; therefore, a conceptual model originally posited for movie recommendations was adapted and implemented to ground this study. The four-stage process of phenomenological data analysis was followed: epoché, horizontalization, imaginative variation, and synthesis. Contribution: No studies have investigated the utility of chatbots in providing supplemental mentoring to future faculty.
This phenomenological study contributes to this area of investigation and provides greater consideration into the unmet mentoring needs of these students, as well as the potential of utilizing chatbots for supplementary mentoring, particularly for those who lack access to high-quality mentoring. Findings: Following the data analysis process, the essence of the findings was that, while underrepresented minority doctoral engineering students have ample unmet mentoring needs and are overall satisfied with the user interface and trustworthiness of chatbots, their intent to use them is mixed due to a lack of personalization in this type of supplemental mentoring relationship. Recommendations for Practitioners: One of the major challenges faced by underrepresented doctoral engineering students is securing quality mentoring relationships that socialize them into the engineering culture and community of practice. While creating opportunities for students and incentivizing faculty to engage in the work of mentoring is needed, we must also consider the ways in which to leverage technology to offer supplemental future faculty mentoring virtually. Recommendations for Researchers: Additional research on the efficacy of chatbots in providing career-focused mentoring to future faculty is needed, as well as on how to enhance the functionality of chatbots to create personal connections and networking opportunities, which are hallmarks of traditional mentoring relationships. Impact on Society: An understanding of the conceptual pathway that can lead to greater satisfaction with chatbots may serve to expand their use in the realm of mentoring.
Scaling virtual faculty mentoring opportunities may be an important breakthrough in meeting mentoring needs across higher education. Future Research: Future chatbot research must focus on connecting chatbot users with human mentors; standardizing the process for response creation through additional data collection with a cadre of diverse, renowned faculty; engaging subject matter experts to conduct quality verification checks on responses; testing new responses with potential users; and launching the chatbots for a broad array of users.