We present Chirpy Cardinal, an open-domain social chatbot. Aiming to be both informative and conversational, our bot chats with users in an authentic, emotionally intelligent way. By integrating controlled neural generation with scaffolded, hand-written dialogue, we let both the user and bot take turns driving the conversation, producing an engaging and socially fluent experience. Deployed in the fourth iteration of the Alexa Prize Socialbot Grand Challenge, Chirpy Cardinal handled thousands of conversations per day, placing second out of nine bots with an average user rating of 3.58/5.
Neural Generation Meets Real People: Towards Emotionally Engaging Mixed-Initiative Conversations
We present Chirpy Cardinal, an open-domain dialogue agent, as a research platform for the 2019 Alexa Prize competition. Building an open-domain socialbot that talks to real people is challenging – such a system must meet multiple user expectations, such as broad world knowledge, conversational style, and emotional connection. Our socialbot engages users on their terms – prioritizing their interests, feelings, and autonomy. As a result, our socialbot provides a responsive, personalized user experience, capable of talking knowledgeably about a wide variety of topics as well as chatting empathetically about ordinary life. Neural generation plays a key role in achieving these goals, providing the backbone for our conversational and emotional tone. At the end of the competition, Chirpy Cardinal progressed to the finals with an average rating of 3.6/5.0, a median conversation duration of 2 minutes 16 seconds, and a 90th-percentile duration of over 12 minutes.
- Award ID(s):
- 1900638
- PAR ID:
- 10318326
- Date Published:
- Journal Name:
- 3rd Proceedings of Alexa Prize (Alexa Prize 2019)
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- Conversational systems typically focus on functional tasks such as scheduling appointments or creating to-do lists. Instead, we design and evaluate SlugBot (SB), one of 8 semifinalists in the 2018 Alexa Prize, whose goal is to support casual open-domain social interaction. This novel application requires both broad topic coverage and engaging interactive skills. We developed a new technical approach to meet this demanding situation by crowd-sourcing novel content and introducing playful conversational strategies based on storytelling and games. We collected over 10,000 conversations during August 2018 as part of the Alexa Prize competition. We also conducted an in-lab follow-up qualitative evaluation. Overall, users found SB moderately engaging; conversations averaged 3.6 minutes and involved 26 user turns. However, users reacted very differently to different conversation subtypes. Storytelling and games were evaluated positively; these were seen as entertaining with a predictable interactive structure. They also led users to impute personality and intelligence to SB. In contrast, search and general chit-chat induced coverage problems; here users found it hard to infer what topics SB could understand, and these conversations were seen as too system-driven. Theoretical and design implications suggest a move away from conversational systems that simply provide factual information. Future systems should be designed to have their own opinions and personal stories to share, and SB provides an example of how we might achieve this.
- Agency is essential to play. As we design conversational agents for early childhood, how might we increase the child-centeredness of our approaches? Giving children agency and control in choosing their agent representations might contribute to the overall playfulness of our designs. In this study with 33 children ages 4–5 years old, we engaged children in a creative storytelling interaction with conversational agents in stuffed-animal embodiments. Young children conversed with the stuffed-animal agents to tell stories about their creative play, engaging in question-and-answer conversation from 2 minutes to 24 minutes. We then interviewed the children about their perceptions of the agent's voice, and their ideas for agent voices, dialogues, and interactions. From babies to robot daddies, we discover three themes in children's suggestions: Family Voices, Robot Voices, and Character Voices. Additionally, children desire agents who (1) scaffold creative play in addition to storytelling, (2) foster personal, social, and emotional connections, and (3) support children's agency and control. Across these themes, we recommend design strategies to support the overall playful child-centeredness of conversational agent design.
- Conversational agents designed to interact through natural language are often imbued with human-like personalities. At times, the agent might also have a distinct persona with traits such as gender, age, or a backstory. Designing such a personality or persona for conversational agents has become a common design practice. In this work, we review the emerging literature on designing agent personas and personalities, and reflect on these approaches along with the personas created for common conversational agents. We discuss open questions with regard to three aspects: meeting user needs, the ethics of deception, and reinforcing social stereotypes through conversational agents. We hope this work can provoke researchers and practitioners to critically reflect on their approach to designing the personality or persona of conversational agents.
- In an information-seeking conversation, a user may ask questions that are under-specified or unanswerable. An ideal agent would interact by initiating different response types according to the available knowledge sources. However, most current studies either fail to incorporate such agent-side initiative or incorporate it artificially. This work presents InSCIt, a dataset for Information-Seeking Conversations with mixed-initiative Interactions. It contains 4.7K user-agent turns from 805 human-human conversations in which the agent searches over Wikipedia and either directly answers, asks for clarification, or provides relevant information to address user queries. The data supports two subtasks, evidence passage identification and response generation, as well as a human evaluation protocol to assess model performance. We report results for two systems based on state-of-the-art models of conversational knowledge identification and open-domain question answering. Both systems significantly underperform humans, suggesting ample room for improvement in future studies.