

Title: Impact of Interaction Context on the Student Affect-Learning Relationship in Child-Robot Interaction
Prior work on affect-aware educational robots has often relied on a common assumption that the relationship between student affect and learning is independent of agent behaviors (the child's or the robot's) or is unidirectional (positive or negative, but not both) throughout the entire student-robot interaction. We argue that the student affect-learning relationship should be interpreted in two contexts: (1) the social learning paradigm and (2) sub-events within the child-robot interaction. In our paper, we examine two different social learning paradigms in which children interact with a robot that acts either as a tutor or as a tutee. Sub-events within child-robot interaction are defined as task-related events occurring in specific phases of an interaction (e.g., when the child/robot gets a wrong answer). We examine sub-events at a macro level (the entire interaction) and a micro level (within specific sub-events). In this paper, we provide an in-depth correlation analysis of children's facial affect and vocabulary learning. We found that children's affective displays became more predictive of their vocabulary learning when children interacted with a tutee robot that did not scaffold their learning. Additionally, children's affect displayed during micro-level events was more predictive of their learning than affect displayed during macro-level events. Finally, we found that the affect-learning relationship is not unidirectional but rather is modulated by context, i.e., several affective states facilitated student learning when displayed in some sub-events but inhibited learning when displayed in others. These findings indicate that both the social learning paradigm and sub-events within the interaction modulate the student affect-learning relationship.
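As a loose illustration of the context-split correlation analysis described above, the sketch below computes Pearson correlations between per-child facial-affect proportions and vocabulary learning gains at the macro level (aggregated over the whole interaction) and at the micro level (within each sub-event), separately for the tutor and tutee robot conditions. The file name, column names, and affect categories are hypothetical placeholders, not the authors' actual data schema.

```python
# Minimal sketch of a context-split affect-learning correlation analysis.
# All column names and the CSV file are assumed placeholders.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("affect_learning.csv")  # one row per child x sub-event (hypothetical)
df["learning_gain"] = df["post_test"] - df["pre_test"]

affect_cols = ["joy", "confusion", "frustration"]  # example facial-affect proportions

# Macro level: collapse affect over all sub-events within each condition (tutor vs. tutee)
for condition, grp in df.groupby("condition"):
    per_child = grp.groupby("child_id")[affect_cols + ["learning_gain"]].mean()
    for affect in affect_cols:
        r, p = pearsonr(per_child[affect], per_child["learning_gain"])
        print(f"[macro] {condition:>6s} {affect:<12s} r={r:+.2f} p={p:.3f}")

# Micro level: correlate affect displayed within each specific sub-event
for (condition, sub_event), grp in df.groupby(["condition", "sub_event"]):
    for affect in affect_cols:
        r, p = pearsonr(grp[affect], grp["learning_gain"])
        print(f"[micro] {condition:>6s} {sub_event:<20s} {affect:<12s} r={r:+.2f} p={p:.3f}")
```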
Award ID(s):
1717362 1734443
NSF-PAR ID:
10183762
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
ACM/IEEE International Conference on Human-Robot Interaction
Page Range / eLocation ID:
389 to 397
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This work describes the design of real-time dance-based interaction with a humanoid robot, where the robot seeks to promote physical activity in children by taking on multiple roles as a dance partner. It acts as a leader by initiating dances but can also act as a follower by mimicking a child’s dance movements. Dances in the leader role are produced by a sequence-to-sequence (S2S) Long Short-Term Memory (LSTM) network trained on children’s music videos taken from YouTube (a minimal illustrative sketch of this idea appears after this list). In the follower mode, a music orchestration platform generates background music as the robot mimics the child’s poses. In doing so, we also incorporated the largely unexplored paradigm of learning-by-teaching by including multiple robot roles that allow the child to both learn from and teach the robot. Our work is among the first to implement a largely autonomous, real-time, full-body dance interaction with a bipedal humanoid robot that also explores the impact of the robot roles on child engagement. Importantly, we also incorporated into our design formal constructs from autism therapy, such as the least-to-most prompting hierarchy, reinforcements for positive behaviors, and a time delay for making behavioral observations. We implemented a multimodal child engagement model that encompasses both affective engagement (displayed through eye-gaze focus and facial expressions) and task engagement (determined by the level of physical activity) to determine child engagement states. We then conducted a virtual exploratory user study to evaluate the impact of mixed robot roles on user engagement and found no statistically significant difference in children’s engagement between single-role and multiple-role interactions. While the children were observed to respond positively to both robot behaviors, they preferred the music-driven leader role over the movement-driven follower role, a result that can partly be attributed to the virtual nature of the study. Our findings support the utility of such a platform for practicing physical activity but indicate that further research is necessary to fully explore the impact of each robot role.
  2. Personalized education technologies capable of delivering adaptive interventions could play an important role in addressing the needs of diverse young learners at a critical time of school readiness. We present an innovative personalized social robot learning companion system that utilizes children’s verbal and nonverbal affective cues to modulate their engagement and maximize their long-term learning gains. We propose an affective reinforcement learning approach to train a personalized policy for each student during an educational activity where a child and a robot tell stories to each other (a minimal illustrative sketch of this approach appears after this list). Using the personalized policy, the robot selects stories that are optimized for each child’s engagement and linguistic skill progression. We recruited 67 bilingual and English language learners between the ages of 4 and 6 to participate in a between-subjects study to evaluate our system. Over a three-month deployment in schools, a unique storytelling policy was trained to deliver a personalized story curriculum for each child in the Personalized group. We compared their engagement and learning outcomes to those of a Non-personalized group with a fixed-curriculum robot and a baseline group that had no robot intervention. Our results show that, in the Personalized condition, the affective policy successfully personalized to each child, boosting their engagement and outcomes with respect to learning and retaining more target words, as well as using more target syntax structures, compared to children in the other groups.
  3. When young children create, they are exploring their emerging skills. And when young children reflect, they are transforming their learning experiences. Yet early childhood play environments often lack toys and tools to scaffold reflection. In this work, we design a stuffed animal robot to converse with young children and prompt creative reflection through open-ended storytelling. We also contribute six design goals for child-robot interaction design. In a hybrid Wizard of Oz study, 33 children ages 4-5 years old across 10 U.S. states engaged in creative play then conversed with a stuffed animal robot to tell a story about their creation. By analyzing children’s story transcripts, we discover four approaches that young children use when responding to the robot’s reflective prompting: Imaginative, Narrative Recall, Process-Oriented, and Descriptive Labeling. Across these approaches, we find that open-ended child-robot interaction can integrate personally meaningful reflective storytelling into diverse creative play practices. 
  4. Autonomous educational social robots can be used to help promote literacy skills in young children. Such robots, which emulate the emotive, perceptual, and empathic abilities of human teachers, are capable of replicating some of the benefits of one-on-one tutoring from human teachers, in part by leveraging individual students’ behavior and task performance data to infer sophisticated models of their knowledge. These student models are then used to provide personalized educational experiences by, for example, determining the optimal sequencing of curricular material. In this paper, we introduce an integrated system for autonomously analyzing and assessing children’s speech and pronunciation in the context of an interactive word game between a social robot and a child. We present a novel game environment and its computational formulation, an integrated pipeline for capturing and analyzing children’s speech in real time, and an autonomous robot that models children’s word pronunciation via Gaussian Process Regression (GPR), augmented with an Active Learning protocol that informs the robot’s behavior (a minimal illustrative sketch of this modeling loop appears after this list). We show that the system is capable of autonomously assessing children’s pronunciation ability, with ground truth determined by a post-experiment evaluation by human raters. We also compare phoneme- and word-level GPR models and discuss the trade-offs of each approach in modeling children’s pronunciation. Finally, we describe and analyze a pipeline for automatic analysis of children’s speech and pronunciation, including an evaluation of Speech Ace as a tool for future development of autonomous, speech-based language tutors.
  5. Agency is essential to play. As we design conversational agents for early childhood, how might we increase the child-centeredness of our approaches? Giving children agency and control in choosing their agent representations might contribute to the overall playfulness of our designs. In this study with 33 children ages 4–5 years old, we engaged children in a creative storytelling interaction with conversational agents in stuffed animal embodiments. Young children conversed with the stuffed animal agents to tell stories about their creative play, engaging in question-and-answer conversations lasting from 2 to 24 minutes. We then interviewed the children about their perceptions of the agent’s voice and their ideas for agent voices, dialogues, and interactions. From babies to robot daddies, we discover three themes in children’s suggestions: Family Voices, Robot Voices, and Character Voices. Additionally, children desire agents who (1) scaffold creative play in addition to storytelling, (2) foster personal, social, and emotional connections, and (3) support children’s agency and control. Across these themes, we recommend design strategies to support the overall playful child-centeredness of conversational agent design.
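Below is a minimal sketch of the sequence-to-sequence LSTM idea mentioned in item 1: an encoder-decoder over pose vectors that, given an observed pose sequence, generates the next segment of dance poses. The pose dimensionality, hidden size, and sequence lengths are illustrative assumptions, not the paper's actual architecture or training setup.

```python
# Minimal sketch of a sequence-to-sequence LSTM for generating dance poses.
# Dimensions and names are assumed for illustration only.
import torch
import torch.nn as nn

POSE_DIM = 34  # e.g., 17 joints x (x, y); an assumption, not from the paper

class Seq2SeqDance(nn.Module):
    def __init__(self, pose_dim=POSE_DIM, hidden=256):
        super().__init__()
        self.encoder = nn.LSTM(pose_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(pose_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, src, tgt):
        # Encode the observed pose sequence, then decode the next dance segment
        _, state = self.encoder(src)           # src: (batch, T_in, pose_dim)
        dec_out, _ = self.decoder(tgt, state)  # tgt: (batch, T_out, pose_dim), teacher forcing
        return self.out(dec_out)               # predicted poses: (batch, T_out, pose_dim)

model = Seq2SeqDance()
src = torch.randn(8, 60, POSE_DIM)   # 8 clips, 60 observed frames
tgt = torch.randn(8, 30, POSE_DIM)   # 30 target frames
pred = model(src, tgt)
loss = nn.functional.mse_loss(pred, tgt)
```

In practice, such a model would be trained on pose sequences extracted from music videos and rolled out autoregressively at interaction time.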
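Item 2's affective reinforcement learning approach can be caricatured as a policy that picks the next story using a reward built from the child's engagement and learning progress. The sketch below reduces this to tabular Q-learning over a tiny, hypothetical action and state space; the state encoding, action names, and reward weighting are assumptions, not the deployed system's design.

```python
# Minimal sketch of an affect-driven story-selection policy via tabular Q-learning.
# States, actions, and reward weights are hypothetical stand-ins.
import random
from collections import defaultdict

ACTIONS = ["easy_story", "medium_story", "hard_story"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(float)  # Q[(state, action)]

def choose_story(state):
    # Epsilon-greedy selection over story difficulty
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, engagement, learning_gain, next_state):
    # Reward blends observed affective engagement with learning progress (assumed weights)
    reward = 0.5 * engagement + 0.5 * learning_gain
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example turn: the state might encode recent affect and skill level (hypothetical encoding)
state = ("high_engagement", "skill_2")
action = choose_story(state)
update(state, action, engagement=0.8, learning_gain=0.6,
       next_state=("high_engagement", "skill_3"))
```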
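Finally, a minimal sketch of the Gaussian Process Regression plus Active Learning loop outlined in item 4, using scikit-learn: the robot fits a GPR over word features, then queries the word whose predicted pronunciation score is most uncertain. The word features and scores here are synthetic placeholders rather than the study's data.

```python
# Minimal sketch of word-level pronunciation modeling with GPR + uncertainty-based
# Active Learning queries. All data below is synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
word_features = rng.normal(size=(50, 4))   # e.g., word length, phoneme difficulty (assumed)
true_scores = rng.uniform(0, 1, size=50)   # stand-in for human-rated pronunciation scores

labeled = list(range(5))                   # start with a few already-assessed words
pool = [i for i in range(50) if i not in labeled]

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)

for _ in range(10):                        # each round, ask the child one more word
    gpr.fit(word_features[labeled], true_scores[labeled])
    mean, std = gpr.predict(word_features[pool], return_std=True)
    query = pool[int(np.argmax(std))]      # query the word the model is least sure about
    labeled.append(query)                  # "assess" it and add it to the training set
    pool.remove(query)

mean, std = gpr.predict(word_features, return_std=True)
print("mean predicted pronunciation score:", mean.mean().round(2))
```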