We present a conversational agent designed to provide realistic conversational practice to older adults at risk of isolation or social anxiety, and report the results of a content analysis on a corpus of data collected from experiments with elderly patients interacting with our system. The conversational agent, represented by a virtual avatar, is designed to hold multiple sessions of casual conversation with older adults. Throughout each interaction, the system analyzes the prosodic and nonverbal behavior of users and provides feedback in the form of periodic comments and suggestions on how to improve. Our avatar is unique in its ability to hold natural dialogues on a wide range of everyday topics—27 topics in three groups, developed in collaboration with a team of gerontologists. The three groups vary in “degrees of intimacy,” and thus in degrees of cognitive difficulty for the user. After collecting data from nine participants who interacted with the avatar for seven to nine sessions over a period of three to four weeks, we present results concerning dialogue behavior and inferred sentiment of the users. Analysis of the dialogues reveals correlations such as greater elaborateness on more difficult topics, increasing elaborateness over successive sessions, stronger sentiments on topics concerned with life goals rather than routine activities, and stronger self-disclosure on more intimate topics. In addition to their intrinsic interest, these results reflect positively on the sophistication and practical applicability of our dialogue system.
MIA: Motivational Interviewing Agent for Improving Conversational Skills in Remote Group Discussions
Since online discussion platforms can limit the perception of social cues, effective collaboration over videochat requires additional attention to conversational skills. However, self-affirmation and defensive-bias theories indicate that feedback may appear confrontational, especially when users are not motivated to incorporate it. We develop a feedback chatbot that employs Motivational Interviewing (MI), a directive counseling method that encourages commitment to behavior change, with the end goal of improving the user's conversational skills. We conduct a within-subject study with 21 participants in 8 teams to evaluate our MI-agent 'MIA' against a non-MI-agent 'Roboto'. After interacting with an agent, participants are tasked with conversing over videochat to evaluate candidate résumés for a job opening. Our quantitative evaluation shows that the MI-agent effectively motivates users, improves their conversational skills, and is likable. Through a qualitative lens, we present the strategies and the cautions needed to fulfill individual and team goals during group discussions. Our findings reveal the potential of the MI technique to improve collaboration and provide examples of conversational tactics important for optimal discussion outcomes.
- Award ID(s): 1750380
- PAR ID: 10329009
- Date Published:
- Journal Name: Proceedings of the ACM on Human-Computer Interaction
- Volume: 6
- Issue: GROUP
- ISSN: 2573-0142
- Page Range / eLocation ID: 1 to 24
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
From automated customer support to virtual assistants, conversational agents have transformed everyday interactions, yet despite phenomenal progress, no such agent exists for programming tasks. To understand the design space of such an agent, we prototyped PairBuddy—an interactive pair programming partner—based on research from conversational agents, software engineering, education, human-robot interaction, psychology, and artificial intelligence. We iterated on PairBuddy’s design through a series of Wizard-of-Oz studies. Our pilot study with six programmers showed promising results and provided insights for PairBuddy’s interface design. In our second study, 14 programmers across all skill levels responded positively. PairBuddy’s active application of soft skills—adaptability, motivation, and social presence—as a navigator increased participants’ confidence and trust, while its technical skills—code contributions, just-in-time feedback, and creativity support—as a driver helped participants realize their own solutions. PairBuddy takes a first step toward an Alexa-like programming partner.
-
Motivational agents are virtual agents that seek to motivate users by providing feedback and guidance. Prior work has shown how certain factors of an agent, such as the type of feedback given or the agent’s appearance, can influence user motivation when completing tasks. However, it is not known how nonverbal mirroring affects an agent’s ability to motivate users. Specifically, would an agent that mirrors be more motivating than an agent that does not? Would an agent trained on real human behaviors be better? We conducted a within-subjects study asking 30 participants to play a “find-the-hidden-object” game while interacting with a motivational agent that provided hints and feedback on the user’s performance. We created three agents: a Control agent that did not respond to the user’s movements, a simple Mimic agent that mirrored the user’s movements on a delay, and a Complex agent that used a machine-learned behavior model. We asked participants to complete a questionnaire rating their levels of motivation and their perceptions of the agent and its feedback. Our results showed that the Mimic agent was more motivating than the Control agent and more helpful than the Complex agent. We also found that when participants became aware of the mimicking behavior, they could find it weird or creepy; it is therefore important to consider how users may detect mimicry when designing virtual agents.
-
As robots become pervasive in human environments, it is important to enable users to effectively convey new skills without programming. Most existing work on Interactive Reinforcement Learning focuses on interpreting and incorporating non-expert human feedback to speed up learning; we aim to design a better representation of the learning agent that is able to elicit more natural and effective communication between the human trainer and the learner, while treating human feedback as discrete communication that depends probabilistically on the trainer’s target policy. This work entails a user study where participants train a virtual agent to accomplish tasks by giving reward and/or punishment in a variety of simulated environments. We present results from 60 participants to show how a learner can ground natural language commands and adapt its action execution speed to learn more efficiently from human trainers. The agent’s action execution speed can be successfully modulated to encourage more explicit feedback from a human trainer in areas of the state space where there is high uncertainty. Our results show that our novel adaptive speed agent dominates different fixed speed agents on several measures of performance. Additionally, we investigate the impact of instructions on user performance and user preference in training conditions.