Title: Chatbots Language Design: The Influence of Language Variation on User Experience with Tourist Assistant Chatbots
Chatbots are often designed to mimic social roles attributed to humans. However, little is known about the impact of using language that fails to conform to the associated social role. Our research draws on sociolinguistics to investigate how a chatbot’s language choices can adhere to the expected social role the agent performs within a context. We seek to understand whether chatbot design should account for linguistic register. This research analyzes how register differences shape the user’s perception of human-chatbot interaction. We produced parallel corpora of conversations in the tourism domain with similar content and varying register characteristics, and evaluated users’ preferences for the chatbot’s linguistic choices in terms of appropriateness, credibility, and user experience. Our results show that register characteristics are strong predictors of users’ preferences, which points to the need to design chatbots with register-appropriate language to improve acceptance and users’ perceptions of chatbot interactions.
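As a loose illustration of the kind of register measurement this study relies on, the minimal Python sketch below scores an utterance on a few surface register markers. The features chosen here (contraction rate, formal connectives, mean word length) are illustrative assumptions, not the paper's actual feature set.

import re

# Illustrative register markers; the paper's feature set is not reproduced here.
CONTRACTIONS = re.compile(r"\b\w+'(?:s|re|ve|ll|d|t|m)\b", re.IGNORECASE)
FORMAL_CONNECTIVES = {"moreover", "furthermore", "consequently", "nevertheless"}

def register_features(utterance: str) -> dict:
    """Return simple surface indicators of linguistic register."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    n = max(len(tokens), 1)
    return {
        "contraction_rate": len(CONTRACTIONS.findall(utterance)) / n,
        "formal_connective_rate": sum(t in FORMAL_CONNECTIVES for t in tokens) / n,
        "mean_word_length": sum(len(t) for t in tokens) / n,
    }

# Parallel responses with similar content but different register, as in the
# study's tourism-domain corpora.
print(register_features("Yeah, it's a great spot! You'll love the beach."))
print(register_features("It is an excellent destination; moreover, the beach is highly regarded."))

Comparing feature vectors across such parallel utterances is one simple way to verify that two corpora differ in register while holding content roughly constant.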
Award ID(s):
1900903
PAR ID:
10330243
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM Transactions on Computer-Human Interaction
Volume:
29
Issue:
2
ISSN:
1073-0516
Page Range / eLocation ID:
1 to 38
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. This work investigates how social agents can be designed to create a sense of ownership over them within a group of users. Social agents, such as conversational agents and chatbots, currently interact with people in impersonal, isolated, and often one-on-one interactions: one user and one agent. This is likely to change as agents become more socially sophisticated and integrated into social fabrics. Previous research has indicated that understanding who owns an agent can help set expectations and clarify who an agent is accountable to within a group. We present findings from a three-week case study in which we implemented a chatbot that succeeded in creating a sense of collective ownership within a community. We discuss the design choices that led to this outcome and implications for social agent design.
  2. Chatbot systems have improved significantly because of the advances made in language modeling. These machine learning systems follow an end-to-end data-driven learning paradigm and are trained on large conversational datasets. Imperfections or harmful biases in the training datasets can cause the models to learn toxic behavior, and thereby expose their users to harmful responses. Prior work has focused on measuring the inherent toxicity of such chatbots, by devising queries that are more likely to produce toxic responses. In this work, we ask the question: How easy or hard is it to inject toxicity into a chatbot after deployment? We study this in a practical scenario known as Dialog-based Learning (DBL), where a chatbot is periodically trained on recent conversations with its users after deployment. A DBL setting can be exploited to poison the training dataset for each training cycle. Our attacks would allow an adversary to manipulate the degree of toxicity in a model and also enable control over what type of queries can trigger a toxic response. Our fully automated attacks only require LLM-based software agents masquerading as (malicious) users to inject high levels of toxicity. We systematically explore the vulnerability of popular chatbot pipelines to this threat. Lastly, we show that several existing toxicity mitigation strategies (designed for chatbots) can be significantly weakened by adaptive attackers. 
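The dialog-based learning loop described above is easy to sketch, which also makes its poisoning surface easy to see. Below is a minimal Python illustration; toxicity_score and fine_tune are hypothetical stand-ins for a real classifier and training routine, not the authors' pipeline, and the filter is deliberately naive.

from dataclasses import dataclass

@dataclass
class Turn:
    user: str
    bot: str

def toxicity_score(text: str) -> float:
    # Placeholder: a deployed system would call a trained toxicity classifier.
    return 1.0 if "toxic" in text.lower() else 0.0

def build_training_set(recent: list, threshold: float = 0.5) -> list:
    # Naive safety filter: keep only turns the classifier does not flag.
    # Adaptive attackers can craft injections that score just under the threshold.
    return [t for t in recent
            if toxicity_score(t.user) < threshold and toxicity_score(t.bot) < threshold]

def fine_tune(model: dict, data: list) -> dict:
    # Stand-in for a real training step that would update model weights.
    model["updates"] += len(data)
    return model

def dbl_cycle(model: dict, recent: list) -> dict:
    # Every retained turn, including turns contributed by malicious users
    # masquerading as ordinary ones, becomes training data: that is the
    # poisoning surface the abstract studies.
    return fine_tune(model, build_training_set(recent))

print(dbl_cycle({"updates": 0}, [Turn("How are you?", "Fine, thanks!")]))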
  3. Social chatbots are designed to build emotional bonds with users, and thus it is particularly important to design these technologies so as to elicit positive perceptions from users. In the current study, we investigate the impacts that transparent explanations of chatbots’ mechanisms have on users’ perceptions of the chatbots. A total of 914 participants were recruited from Amazon Mechanical Turk. They were randomly assigned to observe a conversation between a hypothetical chatbot and a user in one of the two-by-two experimental conditions: whether the participants received an explanation about how the chatbot was trained and whether the chatbot was framed as an intelligent entity or a machine. A fifth group, who believed they were observing interactions between two humans, served as a control. Analyses of participants’ responses to the post-observation survey indicated that transparency positively affected perceptions of social chatbots by leading users to (1) find the chatbot less creepy, (2) feel greater affinity to the chatbot, and (3) perceive the chatbot as more socially intelligent, though these effects were small. Importantly, transparency appeared to have a larger effect in increasing the perceived social intelligence among participants with lower prior AI knowledge. These findings have implications for the design of future social chatbots and support the addition of transparency and explanation for chatbot users.
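For readers parsing the two-by-two-plus-control design above, the short Python sketch below enumerates the four factorial cells (explanation by framing) plus the human-human control and assigns participants to them; the labels and per-participant seeding are illustrative choices, not the authors' procedure.

import itertools
import random

# Four factorial cells plus the human-human control group.
CONDITIONS = [{"explanation": e, "framing": f}
              for e, f in itertools.product((True, False),
                                            ("intelligent entity", "machine"))]
CONDITIONS.append({"control": "human-human conversation"})

def assign(participant_id: int) -> dict:
    # Deterministic per-participant assignment, for reproducibility.
    return random.Random(participant_id).choice(CONDITIONS)

for pid in range(5):
    print(pid, assign(pid))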
  4. Chakraborty, Pinaki (Ed.)
    Social chatbots are aimed at building emotional bonds with users, and thus it is particularly important to design these technologies so as to elicit positive perceptions from users. In the current study, we investigate the impacts that transparent explanations of chatbots’ mechanisms have on users’ perceptions of the chatbots. A total of 914 participants were recruited from Amazon Mechanical Turk. They were randomly assigned to observe conversations between a hypothetical chatbot and a user in one of the two-by-two experimental conditions: whether the participants received an explanation about how the chatbot was trained and whether the chatbot was framed as an intelligent entity or a machine. A fifth group, who believed they were observing interactions between two humans, served as a control. Analyses of participants’ responses to the post-observation survey indicated that transparency positively affected perceptions of social chatbots by leading users to (1) find the chatbot less creepy, (2) feel greater affinity to the chatbot, and (3) perceive the chatbot as more socially intelligent, though these effects were small. Moreover, transparency appeared to have a larger effect on increasing the perceived social intelligence among participants with lower prior AI knowledge. These findings have implications for the design of future social chatbots and support the addition of transparency and explanation for chatbot users.
  5. Aim/Purpose: The purpose of this paper is to explore the efficacy of simulated interactive virtual conversations (chatbots) for mentoring underrepresented minority doctoral engineering students who are considering pursuing a career in the professoriate or in industry.
    Background: Chatbots were developed under the National Science Foundation INCLUDES Design and Developments Launch Pilot award (17-4458) and provide career advice with responses from a pre-programmed database populated by renowned emeriti engineering faculty. Chatbots have been engineered to fulfill a myriad of roles, such as undergraduate student advisement, but no research has been found that addresses their use for supplemental future-faculty mentoring of doctoral students.
    Methodology: Chatbot efficacy is examined through a phenomenological design with focus groups of underrepresented minority doctoral engineering students. No theoretical or conceptual frameworks exist for chatbots designed for future-faculty mentoring; therefore, an adaptation of a conceptual model originally posited for movie recommendations was used to ground this study. The four-stage process of phenomenological data analysis was followed: epoché, horizontalization, imaginative variation, and synthesis.
    Contribution: No studies have investigated the utility of chatbots in providing supplemental mentoring to future faculty. This phenomenological study contributes to this area of investigation and provides greater consideration of the unmet mentoring needs of these students, as well as the potential of chatbots for supplementary mentoring, particularly for those who lack access to high-quality mentoring.
    Findings: The essence of the findings was that, while underrepresented minority doctoral engineering students have ample unmet mentoring needs and are, overall, satisfied with the user interface and trustworthiness of chatbots, their intent to use them is mixed due to a lack of personalization in this type of supplemental mentoring relationship.
    Recommendations for Practitioners: One of the major challenges faced by underrepresented doctoral engineering students is securing quality mentoring relationships that socialize them into the engineering culture and community of practice. While creating opportunities for students and incentivizing faculty to engage in the work of mentoring is needed, we must also consider ways to leverage technology to offer supplemental future-faculty mentoring virtually.
    Recommendations for Researchers: Additional research is needed on the efficacy of chatbots in providing career-focused mentoring to future faculty, as well as on how to enhance the functionality of chatbots to create personal connections and networking opportunities, which are hallmarks of traditional mentoring relationships.
    Impact on Society: An understanding of the conceptual pathway that can lead to greater satisfaction with chatbots may serve to expand their use in the realm of mentoring. Scaling virtual faculty mentoring opportunities may be an important breakthrough in meeting mentoring needs across higher education.
    Future Research: Future chatbot research must focus on connecting chatbot users with human mentors; standardizing the process for response creation through additional data collection with a cadre of diverse, renowned faculty; engaging subject matter experts to conduct quality-verification checks on responses; testing new responses with potential users; and launching the chatbots for a broad array of users.
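The mentoring chatbot described above answers from a pre-programmed database of faculty advice. As a rough illustration of that retrieval-style design, the Python sketch below matches a question against topic keywords; both the matching scheme and the database entries are hypothetical, not the project's implementation.

# Hypothetical advice database keyed by topic keyword.
ADVICE_DB = {
    "tenure": "Document your teaching and research impact from day one.",
    "industry": "Industry roles reward the same rigor; keep publishing if you can.",
    "mentoring": "Seek several mentors; no single person can meet every need.",
}

def respond(question: str) -> str:
    words = set(question.lower().replace("?", " ").split())
    # Return the first canned answer whose topic keyword appears in the question.
    for keyword, answer in ADVICE_DB.items():
        if keyword in words:
            return answer
    return "I don't have advice on that yet; please try rephrasing."

print(respond("Should I go into industry after my PhD?"))

The abstract's finding that users wanted more personalization points at the limits of exactly this kind of canned-response lookup, which is why the authors recommend connecting chatbot users with human mentors.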