Title: Perception of Emotion in Torso and Arm Movements on Humanoid Robot Quori
Displaying emotional states is an important part of nonverbal communication that can facilitate successful interactions. Facial expressions have been studied extensively for their emotional expressiveness; this work instead examines the capacity of body movements to convey different emotions. It first generates a large set of nonverbal behaviors with a variety of torso and arm properties on a humanoid robot, Quori. Participants in a user study then evaluated how much each movement displayed each of eight different emotions. Results indicate that specific movement properties are associated with particular emotions: for example, leaning backward with the arms held high displays surprise, while leaning forward displays sadness. Understanding the emotions associated with certain movements allows for the design of more appropriate behaviors during interactions with humans and could improve people's perception of the robot.
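As a concrete illustration of the kind of study design the abstract describes, here is a minimal sketch of a movement space and its per-emotion Likert ratings. The torso/arm property names and the emotion list (beyond surprise and sadness) are illustrative assumptions, not the paper's actual parameters.

```python
# Hypothetical sketch: parameterized torso/arm movements rated per emotion.
from itertools import product
from statistics import mean

TORSO_LEANS = ["backward", "neutral", "forward"]   # assumed torso property
ARM_HEIGHTS = ["low", "mid", "high"]               # assumed arm property
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "calm", "excitement"]  # assumed set of eight

movements = list(product(TORSO_LEANS, ARM_HEIGHTS))

# ratings[(movement, emotion)] -> list of participants' Likert scores
ratings = {(m, e): [] for m in movements for e in EMOTIONS}

def add_rating(movement, emotion, score):
    """Record one participant's rating of how much a movement shows an emotion."""
    ratings[(movement, emotion)].append(score)

def mean_rating(movement, emotion):
    """Average perceived intensity of an emotion for a movement."""
    scores = ratings[(movement, emotion)]
    return mean(scores) if scores else float("nan")

# The paper's surprise finding would surface as a high mean here:
add_rating(("backward", "high"), "surprise", 5)
print(mean_rating(("backward", "high"), "surprise"))
```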
Award ID(s): 1939037
NSF-PAR ID: 10310641
Journal Name: Companion of the International Conference on Human-Robot Interaction
Format(s): Medium: X
Sponsoring Org: National Science Foundation

More Like this
  1. The overall goal of our research is to develop a system of intelligent multimodal affective pedagogical agents that are effective for different types of learners (Adamo et al., 2021). While most research on pedagogical agents tends to focus on the cognitive aspects of online learning and instruction, this project explores the less-studied role of affective (or emotional) factors. We aim to design believable animated agents that can convey realistic, natural emotions through speech, facial expressions, and body gestures, and that can react to students' detected emotional states with emotional intelligence. Within the context of this goal, the specific objective of the work reported in the paper was to examine the extent to which the agents' facial micro-expressions affect students' perception of the agents' emotions and their naturalness. Micro-expressions are very brief facial expressions that occur when a person either deliberately or unconsciously conceals an emotion being felt (Ekman & Friesen, 1969). Our assumption is that if the animated agents display facial micro-expressions in addition to macro-expressions, they will convey higher expressive richness and naturalness to the viewer, as "the agents can possess two emotional streams, one based on interaction with the viewer and the other based on their own internal state, or situation" (Queiroz et al., 2014, p. 2).

     The work reported in the paper involved two studies with human subjects. The objectives of the first study were to examine whether people can recognize micro-expressions (in isolation) in animated agents, and whether there are differences in recognition based on the agent's visual style (e.g., stylized versus realistic). The objectives of the second study were to investigate whether people can recognize the animated agents' micro-expressions when integrated with macro-expressions; the extent to which the presence of micro- plus macro-expressions affects the perceived expressivity and naturalness of the animated agents; the extent to which exaggerating the micro-expressions (e.g., increasing the amplitude of the animated facial displacements) affects emotion recognition and perceived agent naturalness and emotional expressivity; and whether there are differences based on the agent's design characteristics.

     In the first study, 15 participants watched eight micro-expression animations representing four different emotions (happy, sad, fear, surprised). Four animations featured a stylized agent and four a realistic agent. For each animation, subjects were asked to identify the agent's emotion conveyed by the micro-expression. In the second study, 234 participants watched three sets of eight animation clips (24 clips in total, 12 clips per agent). Four animations per agent featured the character performing macro-expressions only, four featured macro- plus micro-expressions without exaggeration, and four featured macro- plus micro-expressions with exaggeration. Participants were asked to recognize the true emotion of the agent and rate the emotional expressivity and naturalness of the agent in each clip using a 5-point Likert scale. We have collected all the data and completed the statistical analysis. Findings and discussion, implications for research and practice, and suggestions for future work will be reported in the full paper.

     References

     Adamo, N., Benes, B., Mayer, R., Lei, X., Meyer, Z., & Lawson, A. (2021). Multimodal Affective Pedagogical Agents for Different Types of Learners. In: Russo, D., Ahram, T., Karwowski, W., Di Bucchianico, G., & Taiar, R. (eds.), Intelligent Human Systems Integration 2021 (IHSI 2021). Advances in Intelligent Systems and Computing, 1322. Springer, Cham. https://doi.org/10.1007/978-3-030-68017-6_33

     Ekman, P., & Friesen, W. V. (1969). Nonverbal leakage and clues to deception. Psychiatry, 32(1), 88–106. https://doi.org/10.1080/00332747.1969.11023575

     Queiroz, R. B., Musse, S. R., & Badler, N. I. (2014). Investigating macroexpressions and microexpressions in computer graphics animated faces. Presence, 23(2), 191–208. http://dx.doi.org/10.1162/
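As a rough illustration of how the second study's responses could be aggregated per condition, a hedged sketch follows; the condition names, response format, and analysis are assumptions for illustration, not the authors' analysis code.

```python
# Hypothetical sketch: per-condition recognition accuracy and mean naturalness.
from collections import defaultdict

CONDITIONS = ["macro_only", "macro_micro", "macro_micro_exaggerated"]

# Each response: (condition, true_emotion, chosen_emotion, naturalness 1-5).
responses = [
    ("macro_only", "sad", "sad", 3),
    ("macro_micro", "fear", "surprised", 4),
    ("macro_micro_exaggerated", "happy", "happy", 5),
    # ... one row per participant x clip in the real study
]

correct = defaultdict(int)
total = defaultdict(int)
naturalness = defaultdict(list)

for cond, true_emotion, chosen_emotion, rating in responses:
    total[cond] += 1
    correct[cond] += int(chosen_emotion == true_emotion)
    naturalness[cond].append(rating)

for cond in CONDITIONS:
    if total[cond]:
        accuracy = correct[cond] / total[cond]
        mean_nat = sum(naturalness[cond]) / len(naturalness[cond])
        print(f"{cond}: accuracy={accuracy:.2f}, mean naturalness={mean_nat:.2f}")
```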
  2. This work describes the design of real-time, dance-based interaction with a humanoid robot, where the robot seeks to promote physical activity in children by taking on multiple roles as a dance partner. It acts as a leader by initiating dances but can also act as a follower by mimicking a child's dance movements. Dances in the leader role are produced by a sequence-to-sequence (S2S) Long Short-Term Memory (LSTM) network trained on children's music videos taken from YouTube; in the follower mode, a music orchestration platform generates background music as the robot mimics the child's poses. In doing so, we also incorporated the largely unexplored paradigm of learning-by-teaching by including multiple robot roles that allow the child both to learn from and to teach the robot. Our work is among the first to implement a largely autonomous, real-time, full-body dance interaction with a bipedal humanoid robot that also explores the impact of the robot's roles on child engagement. Importantly, we also incorporated into our design formal constructs taken from autism therapy, such as the least-to-most prompting hierarchy, reinforcement of positive behaviors, and a time delay for making behavioral observations. We implemented a multimodal child-engagement model that encompasses both affective engagement (displayed through eye-gaze focus and facial expressions) and task engagement (measured by the level of physical activity) to determine child engagement states. We then conducted a virtual exploratory user study to evaluate the impact of mixed robot roles on user engagement and found no statistically significant difference in children's engagement between single-role and multiple-role interactions. While the children responded positively to both robot behaviors, they preferred the music-driven leader role over the movement-driven follower role, a result that can partly be attributed to the virtual nature of the study. Our findings support the utility of such a platform for practicing physical activity but indicate that further research is necessary to fully explore the impact of each robot role.
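For readers curious what the leader-mode generator might look like, below is a minimal sequence-to-sequence LSTM sketch; the pose dimensionality, hidden size, and autoregressive decoding scheme are assumptions, not the authors' implementation.

```python
# Minimal S2S LSTM sketch for pose-sequence generation (assumed architecture).
import torch
import torch.nn as nn

POSE_DIM = 51  # e.g., 17 joints x 3D coordinates (assumed)

class DanceS2S(nn.Module):
    def __init__(self, pose_dim=POSE_DIM, hidden=256):
        super().__init__()
        self.encoder = nn.LSTM(pose_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(pose_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, src, tgt_len):
        _, state = self.encoder(src)        # summarize the seed pose sequence
        step = src[:, -1:, :]               # start decoding from the last pose
        frames = []
        for _ in range(tgt_len):            # autoregressive frame-by-frame rollout
            dec_out, state = self.decoder(step, state)
            step = self.out(dec_out)
            frames.append(step)
        return torch.cat(frames, dim=1)

model = DanceS2S()
seed = torch.randn(1, 30, POSE_DIM)         # 30 seed frames (stand-in data)
future = model(seed, tgt_len=60)            # generate 60 new dance frames
```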
  3. Movement forms the basis of our thoughts, emotions, and ways of being in the world. Informed by somaesthetics, we design for “taking up space” (e.g. encouraging expansive body movements), which may in turn alter our emotional experience. We demonstrate SoniSpace, an expressive movement interaction experience that uses movement sonification and visualization to encourage users to take up space with their body. We apply a first-person design approach to embed qualities of awareness, exploration, and comfort into the sound and visual design to promote authentic and enjoyable movement expression regardless of prior movement experience. Preliminary results from 20 user experiences with the system show that users felt more comfortable with taking up space and with movement in general following the interaction. We discuss our findings about designing for somatically-focused movement interactions and directions for future work. 
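As a concrete illustration of movement sonification, one simple way a system like SoniSpace could map "taking up space" to sound parameters is sketched below; the bounding-box expansiveness metric and the pitch/volume mapping are assumptions for illustration only.

```python
# Hypothetical sketch: map body expansiveness to volume and pitch.
import numpy as np

def expansiveness(joints):
    """Bounding-box volume of tracked joint positions, shape (N, 3)."""
    extent = joints.max(axis=0) - joints.min(axis=0)
    return float(np.prod(extent))

def to_sound_params(exp, exp_min=0.01, exp_max=1.5):
    """Map expansiveness to a normalized volume and a pitch in Hz."""
    t = np.clip((exp - exp_min) / (exp_max - exp_min), 0.0, 1.0)
    volume = 0.2 + 0.8 * t            # louder as the body opens up
    pitch_hz = 220.0 * 2.0 ** t       # up to one octave above A3
    return volume, pitch_hz

joints = np.random.rand(17, 3)        # stand-in for one motion-capture frame
print(to_sound_params(expansiveness(joints)))
```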
  4. Exoskeleton robots are capable of safe, torque-controlled interactions with a wearer while moving their limbs through pre-defined trajectories. However, affecting and assisting the wearer's movements while effectively incorporating their inputs (effort and movements) during an interaction remains an open problem due to the complex and variable nature of human motion. In this paper, we present a control algorithm that leverages task-specific movement behaviors to control robot torques during unstructured interactions by implementing a force field that imposes a desired joint-angle coordination behavior. This control law, built using principal component analysis (PCA), is implemented and tested with the Harmony exoskeleton. We show that the proposed control law is versatile enough to allow for the imposition of different coordination behaviors with varying levels of impedance stiffness. We also test the feasibility of our method for unstructured human-robot interaction. Specifically, we demonstrate that participants in a human-subject experiment are able to effectively perform reaching tasks while the exoskeleton imposes the desired joint coordination under different movement speeds and interaction modes. Survey results further suggest that the proposed control law may offer a reduction in cognitive or motor effort. This control law opens up the possibility of using the exoskeleton to train participants in accomplishing complex movements.
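The described control law lends itself to a compact sketch: fit a low-dimensional PCA subspace to task-specific joint-angle data, then command spring-like torques pulling the current posture toward its projection onto that subspace. The gains, joint count, and stand-in data below are assumptions, not the Harmony implementation.

```python
# Hedged sketch of a PCA-based coordination force field.
import numpy as np

def fit_coordination_subspace(q_data, n_components=2):
    """q_data: (T, n_joints) joint-angle samples from demonstration motions."""
    mean = q_data.mean(axis=0)
    _, _, vt = np.linalg.svd(q_data - mean, full_matrices=False)
    return mean, vt[:n_components]          # mean posture + principal axes

def coordination_torque(q, mean, axes, stiffness=20.0):
    """Spring toward the PCA subspace: tau = -K * (q - projection(q))."""
    centered = q - mean
    projection = mean + axes.T @ (axes @ centered)
    return -stiffness * (q - projection)

q_data = np.random.rand(500, 7)             # 7-DoF arm, stand-in data
mean, axes = fit_coordination_subspace(q_data)
q_now = np.random.rand(7)
tau = coordination_torque(q_now, mean, axes)
```

Projecting onto the leading components penalizes only deviations orthogonal to the dominant inter-joint coordination seen in the demonstrations, so motion within that subspace is left unresisted.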
  5. Pedagogical agents have the potential to provide not only cognitive support to learners but also socio-emotional support through social behavior. Socio-emotional support can be a critical element in a learner's success, influencing their self-efficacy and motivation. Several social behaviors have been explored with pedagogical agents, including facial expressions, movement, and social dialogue; social dialogue in particular has been shown to positively influence interactions. In this work, we explore the role of paraverbal social behavior, i.e., social behavior expressed through paraverbal cues such as tone of voice and intensity. To do this, we focus on the phenomenon of entrainment, in which individuals adapt the paraverbal features of their speech to one another. Paraverbal entrainment in human-human studies has been found to be correlated with rapport and learning. In a study with 72 middle school students, we evaluate the effects of entrainment with a teachable robot, a pedagogical agent that learners teach how to solve ratio problems. We explore how a teachable robot that both entrains and introduces social dialogue influences rapport and learning, comparing against two baseline conditions: a social condition, in which the robot speaks socially but does not entrain, and a non-social condition, in which the robot neither entrains nor speaks socially. We find that a robot that both entrains and speaks socially results in significantly more learning.
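A minimal sketch of the entrainment idea follows, assuming a simple proportional update of the robot's pitch and intensity toward running estimates of the learner's; the update rule and parameter names are illustrative, not the study's implementation.

```python
# Hypothetical sketch: nudge robot paraverbal features toward the learner's.

class EntrainingVoice:
    def __init__(self, pitch_hz=180.0, intensity_db=60.0, rate=0.3):
        self.pitch_hz = pitch_hz
        self.intensity_db = intensity_db
        self.rate = rate                    # 0 = never adapt, 1 = copy exactly

    def observe(self, user_pitch_hz, user_intensity_db):
        """Shift the robot's speech features partway toward the user's."""
        self.pitch_hz += self.rate * (user_pitch_hz - self.pitch_hz)
        self.intensity_db += self.rate * (user_intensity_db - self.intensity_db)

voice = EntrainingVoice()
voice.observe(user_pitch_hz=220.0, user_intensity_db=55.0)
print(voice.pitch_hz, voice.intensity_db)   # moved partway toward the user
```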