
Title: Context-Dependent Models for Predicting and Characterizing Facial Expressiveness
In recent years, extensive research has emerged in affective computing on topics like automatic emotion recognition and determining the signals that characterize individual emotions. Much less studied, however, is expressiveness: the extent to which someone shows any feeling or emotion. Expressiveness is related to personality and mental health and plays a crucial role in social interaction. As such, the ability to automatically detect or predict expressiveness can facilitate significant advancements in areas ranging from psychiatric care to artificial social intelligence. Motivated by these potential applications, we present an extension of the BP4D+ data set [27] with human ratings of expressiveness and develop methods for (1) automatically predicting expressiveness from visual data and (2) defining relationships between interpretable visual signals and expressiveness. In addition, we study the emotional context in which expressiveness occurs and hypothesize that different sets of signals are indicative of expressiveness in different contexts (e.g., in response to surprise or in response to pain). Analysis of our statistical models confirms our hypothesis. Consequently, by looking at expressiveness separately in distinct emotional contexts, our predictive models show significant improvements over baselines and achieve results comparable to human performance in terms of correlation with the ground truth.
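To make the abstract's setup concrete, below is a minimal, hypothetical sketch of per-context expressiveness prediction: one interpretable linear model is fit per emotional context and scored by Pearson correlation with human ratings. The feature choice (e.g., facial action unit intensities), the Ridge regressor, and every name in the snippet are assumptions for illustration, not the authors' actual pipeline.

```python
# Hypothetical sketch: per-context expressiveness regression, evaluated by
# Pearson correlation with human ratings, as the abstract describes at a high level.
# Feature names, contexts, and the Ridge/correlation choices are assumptions.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

def fit_per_context(features, ratings, contexts):
    """Fit one interpretable linear model per emotional context.

    features: (n_samples, n_features) visual signals (e.g., AU intensities)
    ratings:  (n_samples,) human expressiveness ratings (ground truth)
    contexts: (n_samples,) labels such as "surprise" or "pain"
    """
    models = {}
    for ctx in np.unique(contexts):
        mask = contexts == ctx
        models[ctx] = Ridge(alpha=1.0).fit(features[mask], ratings[mask])
    return models

def evaluate(models, features, ratings, contexts):
    """Report Pearson r between predictions and ratings within each context."""
    for ctx, model in models.items():
        mask = contexts == ctx
        preds = model.predict(features[mask])
        r, _ = pearsonr(preds, ratings[mask])
        print(f"{ctx}: r = {r:.2f}")
```

Fitting separate models per context is one simple way to realize the paper's hypothesis that different signals indicate expressiveness in different contexts; the coefficients of each context's model can then be inspected as the "interpretable relationships" mentioned in the abstract.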
Authors:
Award ID(s):
1750439 1722822
Publication Date:
NSF-PAR ID:
10169267
Journal Name:
Proceedings of the 3rd Workshop on Affective Content Analysis
Sponsoring Org:
National Science Foundation
More Like this
  1. Articulation, emotion, and personality play strong roles in orofacial movements. To improve the naturalness and expressiveness of virtual agents (VAs), it is important that we carefully model the complex interplay between these factors. This paper proposes a conditional generative adversarial network, called conditional sequential GAN (CSG), which learns the relationship between emotion, lexical content, and lip movements in a principled manner. This model uses a set of spectral and emotional speech features directly extracted from the speech signal as conditioning inputs, generating realistic movements. A key feature of the approach is that it is a speech-driven framework that does not require transcripts. Our experiments show the superiority of this model over three state-of-the-art baselines in terms of objective and subjective evaluations. When the target emotion is known, we propose to create emotionally dependent models by either adapting the base model with the target emotional data (CSG-Emo-Adapted) or adding emotional conditions as an input of the model (CSG-Emo-Aware). Objective evaluations of these models show improvements for CSG-Emo-Adapted compared with the CSG model, as the trajectory sequences are closer to the original sequences. Subjective evaluations show significantly better results for this model compared with the CSG model when the target emotion is happiness. (A hedged code sketch of this conditioning idea appears after this list.)
  2. As inborn characteristics, humans possess the ability to judge visual aesthetics, feel the emotions from the environment, and comprehend others' emotional expressions. Many exciting applications become possible if robots or computers can be empowered with similar capabilities. Modeling aesthetics, evoked emotions, and emotional expressions automatically in unconstrained situations, however, is daunting due to the lack of a full understanding of the relationship between low-level visual content and high-level aesthetics or emotional expressions. With the growing availability of data, it is possible to tackle these problems using machine learning and statistical modeling approaches. In the talk, I provide an overview of our research in the last two decades on data-driven analyses of visual artworks and digital visual content for modeling aesthetics and emotions. First, I discuss our analyses of styles in visual artworks. Art historians have long observed the highly characteristic brushstroke styles of Vincent van Gogh and have relied on discerning these styles for authenticating and dating his works. In our work, we compared van Gogh with his contemporaries by statistically analyzing a massive set of automatically extracted brushstrokes. A novel extraction method is developed by exploiting an integration of edge detection and clustering-based segmentation. Evidence substantiates that van Gogh's brushstrokes are strongly rhythmic. Next, I describe an effort to model the aesthetic and emotional characteristics in visual contents such as photographs. By taking a data-driven approach, using the Internet as the data source, we show that computers can be trained to recognize various characteristics that are highly relevant to aesthetics and emotions. Future computer systems equipped with such capabilities are expected to help millions of users in unimagined ways. Finally, I highlight our research on automated recognition of bodily expression of emotion. We propose a scalable and reliable crowdsourcing approach for collecting in-the-wild perceived emotion data for computers to learn to recognize the body language of humans. Comprehensive statistical analysis revealed many interesting insights from the dataset. A system to model the emotional expressions based on bodily movements, named ARBEE (Automated Recognition of Bodily Expression of Emotion), has also been developed and evaluated. (A toy sketch of the edge-plus-clustering brushstroke idea mentioned above appears after this list.)
  3. The overall goal of our research is to develop a system of intelligent multimodal affective pedagogical agents that are effective for different types of learners (Adamo et al., 2021). While most of the research on pedagogical agents tends to focus on the cognitive aspects of online learning and instruction, this project explores the less-studied role of affective (or emotional) factors. We aim to design believable animated agents that can convey realistic, natural emotions through speech, facial expressions, and body gestures and that can react to the students' detected emotional states with emotional intelligence. Within the context of this goal, the specific objective of the work reported in the paper was to examine the extent to which the agents' facial micro-expressions affect students' perception of the agents' emotions and their naturalness. Micro-expressions are very brief facial expressions that occur when a person either deliberately or unconsciously conceals an emotion being felt (Ekman & Friesen, 1969). Our assumption is that if the animated agents display facial micro-expressions in addition to macro-expressions, they will convey higher expressive richness and naturalness to the viewer, as "the agents can possess two emotional streams, one based on interaction with the viewer and the other based on their own internal state, or situation" (Queiroz et al., 2014, p. 2). The work reported in the paper involved two studies with human subjects. The objectives of the first study were to examine whether people can recognize micro-expressions (in isolation) in animated agents, and whether there are differences in recognition based on the agent's visual style (e.g., stylized versus realistic). The objectives of the second study were to investigate whether people can recognize the animated agents' micro-expressions when integrated with macro-expressions; the extent to which the presence of micro- plus macro-expressions affects the perceived expressivity and naturalness of the animated agents; the extent to which exaggerating the micro-expressions (e.g., increasing the amplitude of the animated facial displacements) affects emotion recognition and perceived agent naturalness and emotional expressivity; and whether there are differences based on the agent's design characteristics. In the first study, 15 participants watched eight micro-expression animations representing four different emotions (happy, sad, fear, surprised). Four animations featured a stylized agent and four a realistic agent. For each animation, subjects were asked to identify the agent's emotion conveyed by the micro-expression. In the second study, 234 participants watched three sets of eight animation clips (24 clips in total, 12 clips per agent). Four animations for each agent featured the character performing macro-expressions only, four featured macro- plus micro-expressions without exaggeration, and four featured macro- plus micro-expressions with exaggeration. Participants were asked to recognize the true emotion of the agent and rate the emotional expressivity and naturalness of the agent in each clip using a 5-point Likert scale. We have collected all the data and completed the statistical analysis. Findings and discussion, implications for research and practice, and suggestions for future work will be reported in the full paper.

References
Adamo, N., Benes, B., Mayer, R., Lei, X., Meyer, Z., & Lawson, A. (2021). Multimodal Affective Pedagogical Agents for Different Types of Learners. In: Russo, D., Ahram, T., Karwowski, W., Di Bucchianico, G., & Taiar, R. (eds.), Intelligent Human Systems Integration 2021. IHSI 2021. Advances in Intelligent Systems and Computing, 1322. Springer, Cham. https://doi.org/10.1007/978-3-030-68017-6_33
Ekman, P., & Friesen, W. V. (1969, February). Nonverbal leakage and clues to deception. Psychiatry, 32(1), 88–106. https://doi.org/10.1080/00332747.1969.11023575
Queiroz, R. B., Musse, S. R., & Badler, N. I. (2014). Investigating Macroexpressions and Microexpressions in Computer Graphics Animated Faces. Presence, 23(2), 191–208. http://dx.doi.org/10.1162/

  4. Emotion expression in human-robot interaction has been widely explored; however, little is known about how such expressions should be coupled with feelings and opinions expressed by a social robot. We explored how 12 children experienced emotionally expressive social commentaries from a reading companion robot across five interaction styles that differed in their non-verbal emotional expressiveness and opinionated conversational styles (neutral, divergent, or convergent opinions). We found that, while the robot's opinions and non-verbal emotion expressions affected children's experiences with the robot, the speech content of the commentaries was the more prominent factor in their experience. Additionally, children differed in their perceptions of social commentary: while some expressed a sense of connection-making with the robot's self-disclosure commentaries, others felt distracted by them or felt like the robot was off-topic. We recommend that designers pay particular attention to the robot's speech content and consider children's individual differences in designing emotional and opinionated speech.
  5. Touch as a modality in social communication has been getting more attention with recent developments in wearable technology and an increase in awareness of how limited physical contact can lead to touch starvation and feelings of depression. Although several mediated touch methods have been developed for conveying emotional support, the transfer of emotion through mediated touch has not been widely studied. This work addresses this need by exploring emotional communication through a novel wearable haptic system. The system records physical touch patterns through an array of force sensors, processes the recordings using novel gesture-based algorithms to create actuator control signals, and generates mediated social touch through an array of voice coil actuators. We conducted a human subject study (N = 20) to understand the perception and emotional components of this mediated social touch for common social touch gestures, including poking, patting, massaging, squeezing, and stroking. Our results show that the speed of the virtual gesture significantly alters the participants' ratings of valence, arousal, realism, and comfort of these gestures, with increased speed producing negative emotions and decreased realism. The findings from the study will allow us to better recognize generic patterns from human mediated touch perception and determine how mediated social touch can be used to convey emotion. Our system design, signal processing methods, and results can provide guidance in future mediated social touch design. (A hedged sketch of this sensor-to-actuator signal path appears after this list.)
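Related to item 1 above (as flagged there): a minimal, hypothetical sketch of a speech-driven, emotion-conditioned sequence generator in the spirit of CSG-Emo-Aware. The feature dimensions, the LSTM backbone, and the omission of the adversarial discriminator and training loop are simplifications and assumptions, not the paper's actual model.

```python
# Hypothetical sketch: a speech-driven generator that maps per-frame speech
# features plus an emotion condition to lip-movement parameters.
# Dimensions and architecture are assumptions, not the CSG paper's model.
import torch
import torch.nn as nn

class EmotionAwareLipGenerator(nn.Module):
    def __init__(self, speech_dim=40, emotion_dim=4, hidden_dim=128, lip_dim=20):
        super().__init__()
        # The emotion condition is concatenated to every frame of speech features.
        self.rnn = nn.LSTM(speech_dim + emotion_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, lip_dim)

    def forward(self, speech_feats, emotion_onehot):
        # speech_feats: (batch, frames, speech_dim); emotion_onehot: (batch, emotion_dim)
        cond = emotion_onehot.unsqueeze(1).expand(-1, speech_feats.size(1), -1)
        h, _ = self.rnn(torch.cat([speech_feats, cond], dim=-1))
        return self.out(h)  # (batch, frames, lip_dim) lip-movement trajectory

# Usage: generate a 100-frame lip trajectory conditioned on one emotion class.
gen = EmotionAwareLipGenerator()
speech = torch.randn(1, 100, 40)
emotion = torch.tensor([[0.0, 1.0, 0.0, 0.0]])
trajectory = gen(speech, emotion)
```

In a full GAN setup, a discriminator would judge generated trajectories against recorded ones; the sketch keeps only the conditioning mechanism that distinguishes an emotion-aware variant from the base speech-driven model.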
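Related to item 2 above: a toy illustration of the general idea of integrating edge detection with clustering-based segmentation to flag brushstroke-like boundaries. The parameters and the recipe itself are assumptions for illustration; they are not the authors' extraction method.

```python
# Toy illustration: combine color clustering (coarse paint regions) with edge
# detection (stroke boundaries) to obtain candidate brushstroke borders.
# All thresholds and steps are assumptions, not the authors' algorithm.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def candidate_stroke_borders(image_bgr, n_clusters=8, canny_lo=50, canny_hi=150):
    h, w, _ = image_bgr.shape

    # 1) Cluster pixel colors into a coarse segmentation of paint regions.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        image_bgr.reshape(-1, 3).astype(np.float32)
    ).reshape(h, w)

    # 2) Detect intensity edges, which tend to trace brushstroke boundaries.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)

    # 3) Keep segment boundaries that coincide with strong edges.
    cluster_borders = cv2.Laplacian(labels.astype(np.uint8), cv2.CV_8U) > 0
    return np.logical_and(cluster_borders, edges > 0)
```

The intersection step is the "integration" idea in miniature: neither edges nor clusters alone isolate strokes well, but boundaries supported by both cues are better candidates.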
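Related to item 5 above: a hypothetical sketch of the sensor-to-actuator path described there, where a recorded force-sensor gesture is time-scaled (the speed variable examined in the study) and normalized into actuator drive commands. Sensor layout, normalization, and resampling choices are assumptions, not the authors' gesture-based algorithms.

```python
# Hypothetical sketch: replay a recorded force-sensor gesture at a chosen speed
# as normalized voice-coil drive amplitudes. All choices here are assumptions.
import numpy as np
from scipy.signal import resample

def gesture_to_actuation(force_frames, speed_factor=1.0, max_amplitude=1.0):
    """Convert a recorded gesture into actuator control signals.

    force_frames: (n_frames, n_sensors) array of force readings
    speed_factor: >1 plays the gesture back faster, <1 slower
    """
    n_frames, n_sensors = force_frames.shape
    # Time-scale the gesture: fewer output frames means faster playback.
    n_out = max(1, int(round(n_frames / speed_factor)))
    scaled = resample(force_frames, n_out, axis=0)

    # Normalize per-recording forces into [0, max_amplitude] drive commands.
    peak = float(np.max(np.abs(scaled))) or 1.0
    return np.clip(scaled / peak, 0.0, 1.0) * max_amplitude

# Example: replay a stroking gesture at double speed across 16 actuators.
recording = np.abs(np.random.randn(500, 16))  # placeholder for sensor data
drive = gesture_to_actuation(recording, speed_factor=2.0)
```

Varying `speed_factor` is one simple way to reproduce the study's manipulation, since the abstract reports that gesture speed was the factor that shifted valence, arousal, realism, and comfort ratings.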