Title: Perception of Emotion in Torso and Arm Movements on Humanoid Robot Quori
Displaying emotional states is an important part of nonverbal communication that can facilitate successful interactions. Facial expressions have been widely studied for their emotional expressiveness; this work instead examines the capacity of body movements to convey different emotions. It first generates a large set of nonverbal behaviors with a variety of torso and arm properties on a humanoid robot, Quori. Participants in a user study evaluated how strongly each movement displayed each of eight different emotions. Results indicate that specific movement properties are associated with particular emotions, such as leaning backward with arms held high conveying surprise and leaning forward conveying sadness. Understanding the emotions associated with certain movements can inform the design of more appropriate behaviors during interactions with humans and could improve people's perception of the robot.
Award ID(s):
1939037
PAR ID:
10310641
Author(s) / Creator(s):
;
Date Published:
Journal Name:
Companion of the International Conference on Human-Robot Interaction
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Movement forms the basis of our thoughts, emotions, and ways of being in the world. Informed by somaesthetics, we design for “taking up space” (e.g., encouraging expansive body movements), which may in turn alter our emotional experience. We demonstrate SoniSpace, an expressive movement interaction experience that uses movement sonification and visualization to encourage users to take up space with their bodies. We apply a first-person design approach to embed qualities of awareness, exploration, and comfort into the sound and visual design, promoting authentic and enjoyable movement expression regardless of prior movement experience. Preliminary results from 20 user experiences with the system show that users felt more comfortable taking up space, and with movement in general, following the interaction. We discuss our findings about designing for somatically focused movement interactions and directions for future work. (A hedged sketch of one possible sonification mapping appears below this entry.)
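The abstract does not specify how SoniSpace turns movement into sound, so the Python sketch below is only a hedged illustration of a generic movement-sonification mapping: it computes a crude "expansiveness" measure from tracked 3D joint positions and maps it to pitch and loudness. The bounding-box measure, function names, and scaling constants are all assumptions, not the system's actual design.

```python
# Hedged sketch: one way movement sonification of "taking up space" could work.
# Everything here is an assumption; SoniSpace's real mapping is not described.
import numpy as np

def expansiveness(joints: np.ndarray) -> float:
    """Bounding-box volume of tracked 3D joint positions, shape (J, 3)."""
    extent = joints.max(axis=0) - joints.min(axis=0)
    return float(np.prod(extent))

def sonify(joints: np.ndarray, base_freq: float = 220.0) -> tuple[float, float]:
    """Map expansiveness to sound parameters: wider poses -> louder, brighter.

    Returns (frequency_hz, amplitude) for a synthesizer to render.
    """
    e = expansiveness(joints)
    amplitude = min(1.0, e / 2.0)      # saturate loudness for very wide poses
    frequency = base_freq * (1.0 + e)  # raise pitch as the body expands
    return frequency, amplitude

# Toy usage: a compact pose vs. an expansive pose of 15 tracked joints.
rng = np.random.default_rng(1)
compact = rng.normal(scale=0.1, size=(15, 3))
expansive = rng.normal(scale=0.6, size=(15, 3))
print(sonify(compact), sonify(expansive))
```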
  2. Longstanding theories of emotion socialization postulate that caregiver emotional and behavioral reactions to a child's emotions together shape the child's emotion displays over time. Despite the notable importance of positive valence system function, the majority of research on caregiver emotion socialization focuses on negative valence system emotions. In the current project, we leveraged a relatively large cross-sectional study of caregivers (N = 234; 93.59% White) of preschool-aged children to investigate whether, and to what degree, caregiver (1) emotional experiences or (2) external behaviors, in the context of preschoolers' positive emotion displays in caregiver–child interactions, are associated with children's general positive affect tendencies. Results indicated that, in the context of everyday caregiver–child interactions, caregiver-reported positively valenced emotions, but not approach behaviors, were positively associated with child general positive affect tendencies. However, when examining specific caregiver behaviors in response to everyday child positive emotion displays, caregiver reports of narrating the child's emotion and joining in the emotion with their child were positively associated with child general positive affect tendencies. Together, these results suggest that in everyday caregiver–child interactions, caregivers' emotional experiences and attunement with the child play a role in shaping preschoolers' overall tendencies toward positive affect.
  3. Exoskeleton robots are capable of safe torque-controlled interactions with a wearer while moving their limbs through pre-defined trajectories. However, affecting and assisting the wearer's movements while effectively incorporating their inputs (effort and movements) during an interaction remains an open problem due to the complex and variable nature of human motion. In this paper, we present a control algorithm that leverages task-specific movement behaviors to control robot torques during unstructured interactions by implementing a force field that imposes a desired joint-angle coordination behavior. This control law, built using principal component analysis (PCA), is implemented and tested with the Harmony exoskeleton. We show that the proposed control law is versatile enough to allow for the imposition of different coordination behaviors with varying levels of impedance stiffness. We also test the feasibility of our method for unstructured human-robot interaction. Specifically, we demonstrate that participants in a human-subject experiment are able to effectively perform reaching tasks while the exoskeleton imposes the desired joint coordination under different movement speeds and interaction modes. Survey results further suggest that the proposed control law may offer a reduction in cognitive or motor effort. This control law opens up the possibility of using the exoskeleton to train participants in accomplishing complex movements. (A hedged sketch of the general PCA idea appears below this entry.)
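The abstract names the controller's ingredients (a PCA-derived coordination subspace and a force field with tunable stiffness) without giving equations, so the Python sketch below is a loose, hedged reconstruction of the general idea rather than the Harmony exoskeleton's actual control law: it fits a PCA subspace to recorded joint-angle samples and computes a spring-like torque pulling the current pose toward that subspace. All names, shapes, and gains are illustrative assumptions.

```python
# Hedged sketch: PCA-based joint-coordination force field.
# Not the Harmony controller; all names and parameters are illustrative.
import numpy as np

def fit_coordination_subspace(joint_samples: np.ndarray, n_components: int = 2):
    """Fit a PCA subspace to task-specific joint-angle samples.

    joint_samples: (T, J) array of joint angles recorded during the task.
    Returns the mean pose and the top principal directions, shape (J, n_components).
    """
    mean = joint_samples.mean(axis=0)
    centered = joint_samples - mean
    # SVD of the centered data yields the principal coordination directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components].T

def coordination_torque(q: np.ndarray, mean: np.ndarray, basis: np.ndarray,
                        stiffness: float = 5.0) -> np.ndarray:
    """Spring-like torque pulling the current pose q toward the subspace."""
    deviation = q - mean
    # Project the deviation onto the PCA subspace.
    projected = basis @ (basis.T @ deviation)
    # Penalize only the component orthogonal to the subspace; the
    # stiffness gain sets how strongly coordination is imposed.
    return -stiffness * (deviation - projected)

# Example: 100 recorded poses of a 7-joint arm, then a torque for a new pose.
rng = np.random.default_rng(0)
samples = rng.normal(size=(100, 7))
mean, basis = fit_coordination_subspace(samples, n_components=2)
tau = coordination_torque(rng.normal(size=7), mean, basis)
```

Varying the stiffness gain corresponds to the abstract's "varying levels of impedance stiffness": a larger gain imposes the coordination pattern more rigidly, while a smaller gain leaves the wearer more freedom to deviate.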
  4. Many large introductory classes are taught in stadium-style classrooms, which makes group work more difficult due to the room layout and immobile seating. These classrooms may create challenges for an instructor who wants to monitor student engagement because the layouts make it difficult to interact with the students as they work. Student nonverbal actions, such as eyes on the paper or an unsettled gaze, can be used to determine when students are actively engaged during group work. While other methods have been implemented to determine student actions during a class period, in larger settings these protocols require time-consuming data collection and cannot give in-the-moment feedback. In this study, student verbal and nonverbal interactions were analyzed and compared to determine the types of nonverbal interactions students take when collaboratively engaging in group work during lectures. It was found that a larger variety of nonverbal interactions, such as gesturing and leaning, were used when students were collaboratively working within their groups. Instructors of large enrollment classrooms can use the results of this work to aid in their facilitation of group work within stadium-style classrooms. 
  5.
    Expressive behaviors conveyed during daily interactions are difficult to characterize because they often consist of a blend of different emotions. The complexity of expressive human communication is an important challenge for building and evaluating automatic systems that can reliably predict emotions. Emotion recognition systems are often trained with limited databases, where the emotions are either elicited or portrayed by actors. These approaches do not necessarily reflect real emotions, creating a mismatch when the same emotion recognition systems are applied to practical applications. Developing rich emotional databases that reflect the complexity in the externalization of emotion is an important step toward building better models to recognize emotions. This study presents the MSP-Face database, a natural audiovisual database obtained from video-sharing websites, where multiple individuals discuss various topics, expressing their opinions and experiences. The natural recordings convey a broad range of emotions that are difficult to obtain with other data collection protocols. A distinctive feature of the corpus is its two sets. The first set includes videos annotated with emotional labels using a crowd-sourcing protocol (9,370 recordings; 24 hrs, 41 min). The second set includes similar videos without emotional labels (17,955 recordings; 45 hrs, 57 min), offering the perfect infrastructure to explore semi-supervised and unsupervised machine-learning algorithms on natural emotional videos. This study describes the process of collecting and annotating the corpus. It also provides baselines over this new database using unimodal (audio, video) and multimodal emotion recognition systems. (A hedged sketch of a semi-supervised training loop that such a split enables appears below this entry.)
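MSP-Face's pairing of a labeled set with a larger unlabeled set is the classic setup for semi-supervised learning. As a hedged illustration only (this is not one of the paper's baselines), the Python sketch below runs a simple self-training loop: fit a classifier on labeled features, pseudo-label the unlabeled samples the model is confident about, and retrain. The feature vectors, classifier choice, and threshold are all placeholders.

```python
# Hedged sketch: self-training on a labeled/unlabeled split like MSP-Face's.
# Feature vectors and the classifier are placeholders, not the paper's baselines.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, confidence=0.9, rounds=3):
    """Iteratively pseudo-label confident unlabeled samples and retrain."""
    clf = LogisticRegression(max_iter=1000)
    X, y = X_labeled.copy(), y_labeled.copy()
    for _ in range(rounds):
        clf.fit(X, y)
        if len(X_unlabeled) == 0:
            break
        probs = clf.predict_proba(X_unlabeled)
        conf_mask = probs.max(axis=1) >= confidence
        if not conf_mask.any():
            break  # nothing confident left to pseudo-label
        # Adopt confident predictions as pseudo-labels and grow the training set.
        X = np.vstack([X, X_unlabeled[conf_mask]])
        y = np.concatenate([y, probs[conf_mask].argmax(axis=1)])
        X_unlabeled = X_unlabeled[~conf_mask]
    return clf

# Toy usage with random "audiovisual features" standing in for real ones.
rng = np.random.default_rng(0)
Xl, yl = rng.normal(size=(50, 16)), rng.integers(0, 4, size=50)
Xu = rng.normal(size=(200, 16))
model = self_train(Xl, yl, Xu)
```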