

Title: Micro-expressions in Animated Agents

The overall goal of our research is to develop a system of intelligent multimodal affective pedagogical agents that are effective for different types of learners (Adamo et al., 2021). While most research on pedagogical agents tends to focus on the cognitive aspects of online learning and instruction, this project explores the less-studied role of affective (or emotional) factors. We aim to design believable animated agents that can convey realistic, natural emotions through speech, facial expressions, and body gestures, and that can react to students' detected emotional states with emotional intelligence. Within the context of this goal, the specific objective of the work reported in the paper was to examine the extent to which the agents' facial micro-expressions affect students' perception of the agents' emotions and their naturalness. Micro-expressions are very brief facial expressions that occur when a person either deliberately or unconsciously conceals an emotion being felt (Ekman & Friesen, 1969). Our assumption is that if animated agents display facial micro-expressions in addition to macro-expressions, they will convey greater expressive richness and naturalness to the viewer, as "the agents can possess two emotional streams, one based on interaction with the viewer and the other based on their own internal state, or situation" (Queiroz et al., 2014, p. 2).

The work reported in the paper involved two studies with human subjects. The objectives of the first study were to examine whether people can recognize micro-expressions (in isolation) in animated agents, and whether there are differences in recognition based on the agent's visual style (stylized versus realistic). The objectives of the second study were to investigate whether people can recognize the animated agents' micro-expressions when integrated with macro-expressions; the extent to which the presence of combined micro- and macro-expressions affects the perceived expressivity and naturalness of the agents; the extent to which exaggerating the micro-expressions (i.e., increasing the amplitude of the animated facial displacements) affects emotion recognition and perceived agent naturalness and emotional expressivity; and whether there are differences based on the agent's design characteristics.

In the first study, 15 participants watched eight micro-expression animations representing four different emotions (happiness, sadness, fear, surprise). Four animations featured a stylized agent and four a realistic agent. For each animation, subjects were asked to identify the agent's emotion conveyed by the micro-expression. In the second study, 234 participants watched three sets of eight animation clips (24 clips in total, 12 per agent). For each agent, four animations featured the character performing macro-expressions only, four featured macro- plus micro-expressions without exaggeration, and four featured macro- plus micro-expressions with exaggeration. Participants were asked to recognize the true emotion of the agent and to rate the emotional expressivity and naturalness of the agent in each clip using a 5-point Likert scale. We have collected all the data and completed the statistical analysis. Findings and discussion, implications for research and practice, and suggestions for future work will be reported in the full paper.
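The exaggeration condition in the second study scales the amplitude of the facial displacements that make up a micro-expression. Below is a minimal sketch of one plausible way to realize this, assuming an additive blendshape layering scheme with a gain factor on the micro-expression curve; the curve names, frame rate, and gain value are illustrative, not the authors' production pipeline.

```python
# Hypothetical sketch: layering a micro-expression blendshape curve on
# top of a macro-expression curve, with an optional exaggeration gain.
# Curve names, frame rate, and the additive scheme are assumptions.
from dataclasses import dataclass

@dataclass
class Keyframe:
    frame: int
    value: float  # normalized blendshape weight, 0.0-1.0

def sample(curve, frame):
    """Linearly interpolate a blendshape curve at the given frame."""
    if frame <= curve[0].frame:
        return curve[0].value
    for a, b in zip(curve, curve[1:]):
        if a.frame <= frame <= b.frame:
            t = (frame - a.frame) / (b.frame - a.frame)
            return a.value + t * (b.value - a.value)
    return curve[-1].value

def layered_weight(macro, micro, frame, gain=1.0):
    """Add the (optionally exaggerated) micro weight to the macro
    weight, clamped to the valid blendshape range."""
    w = sample(macro, frame) + gain * sample(micro, frame)
    return max(0.0, min(1.0, w))

# A micro-expression is very brief: here a ~0.2 s pulse at 24 fps
# riding on a sustained macro-expression smile.
macro_smile = [Keyframe(0, 0.0), Keyframe(24, 0.6), Keyframe(72, 0.6)]
micro_brow = [Keyframe(30, 0.0), Keyframe(33, 0.4), Keyframe(35, 0.0)]

for f in (30, 33, 35):
    plain = layered_weight(macro_smile, micro_brow, f, gain=1.0)
    exaggerated = layered_weight(macro_smile, micro_brow, f, gain=1.5)  # amplified displacement
    print(f"frame {f}: plain={plain:.2f} exaggerated={exaggerated:.2f}")
```

Clamping keeps the combined weight in the valid blendshape range, so the exaggerated micro-expression stays physically plausible on the rig.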
References

Adamo, N., Benes, B., Mayer, R., Lei, X., Meyer, Z., & Lawson, A. (2021). Multimodal Affective Pedagogical Agents for Different Types of Learners. In D. Russo, T. Ahram, W. Karwowski, G. Di Bucchianico, & R. Taiar (Eds.), Intelligent Human Systems Integration 2021. IHSI 2021. Advances in Intelligent Systems and Computing, 1322. Springer, Cham. https://doi.org/10.1007/978-3-030-68017-6_33

Ekman, P., & Friesen, W. V. (1969). Nonverbal leakage and clues to deception. Psychiatry, 32(1), 88–106. https://doi.org/10.1080/00332747.1969.11023575

Queiroz, R. B., Musse, S. R., & Badler, N. I. (2014). Investigating Macroexpressions and Microexpressions in Computer Graphics Animated Faces. Presence, 23(2), 191–208. http://dx.doi.org/10.1162/

 
Award ID(s): 1821894
NSF-PAR ID: 10340819
Journal Name: AHFE International
Volume: 22
ISSN: 2771-0718
Sponsoring Org: National Science Foundation
More Like this
  1. The paper reports ongoing research toward the design of multimodal affective pedagogical agents that are effective for different types of learners and applications. In particular, the work reported in the paper investigated the extent to which the type of character design (realistic versus stylized) affects students' perception of an animated agent's facial emotions, and whether the effects are moderated by learner characteristics (e.g., gender). Eighty-two participants viewed 10 animation clips featuring a stylized character exhibiting 5 different emotions (happiness, sadness, fear, surprise, and anger; 2 clips per emotion) and 10 clips featuring a realistic character portraying the same emotional states. The participants were asked to name the emotions and rate their sincerity, intensity, and typicality. The results indicated that participants were slightly more likely to recognize the emotions displayed by the stylized agent, although the difference was not statistically significant. The stylized agent was on average rated significantly higher for facial emotion intensity, whereas the differences in ratings for typicality and sincerity across all emotions were not statistically significant. Significant differences favoring the stylized agent were found for sadness (within typicality), happiness (within sincerity), and fear, anger, sadness, and happiness (within intensity). Gender was not a significant correlate, either across all emotions or for individual emotions. (A sketch of this kind of within-subject rating comparison follows.)
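A minimal sketch of the within-subject comparison described above, assuming hypothetical participant-level mean ratings on a 1-5 scale and a paired t-test; the data, scale, and test choice are placeholders, not the authors' analysis.

```python
# Placeholder data and analysis: per-participant mean intensity ratings
# for the stylized vs. realistic agent, compared with a paired t-test.
# The numbers are fabricated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 82  # number of participants in the study

# Hypothetical mean intensity ratings per participant (1-5 scale).
stylized = np.clip(rng.normal(3.8, 0.6, n), 1, 5)
realistic = np.clip(rng.normal(3.4, 0.6, n), 1, 5)

t, p = stats.ttest_rel(stylized, realistic)
print(f"paired t({n - 1}) = {t:.2f}, p = {p:.4f}")
```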
  2. The goal of this research is to develop Animated Pedagogical Agents (APA) that can convey clearly perceivable emotions through speech, facial expressions, and body gestures. In particular, the two studies reported in the paper investigated the extent to which modifications to the range of movement of three beat gestures (both arms synchronous outward gesture, both arms synchronous forward gesture, and upper body lean) and the agent's gender have significant effects on viewers' perception of the agent's emotion in terms of valence and arousal. For each gesture, the range of movement was varied at 2 discrete levels. The stimuli were two sets of 12-second animation clips generated using fractional factorial designs; in each clip an animated agent, speaking and gesturing, gives a lecture segment on binomial probability. Half of the clips featured a female agent and half a male agent. In the first study, which used a within-subject design and metric conjoint analysis, 120 subjects were asked to watch 8 stimulus clips and rank them according to perceived valence and arousal (from highest to lowest). In the second study, which used a between-subject design, 300 participants were assigned to two groups of 150 subjects each. One group watched 8 clips featuring the male agent and the other watched 8 clips featuring the female agent. Each participant was asked to rate perceived valence and arousal for each clip using a 7-point Likert scale. Results from the two studies suggest that the more open and forward the gestures the agent makes, the higher the perceived valence and arousal. Surprisingly, agents who lean their body forward more are not perceived as having higher arousal and valence. Findings also show that female agents' emotions are perceived as having higher arousal and more positive valence than male agents' emotions. (An enumeration of such a two-level gesture design is sketched below.)
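A sketch of how a two-level design over the three gesture factors could be enumerated; the factor and level names are assumptions, and whether the paper's fractional factorial corresponds to this full 2^3 crossing is not confirmed by the abstract.

```python
# Illustrative enumeration of a two-level design over the three
# beat-gesture factors. Factor and level names are assumptions.
from itertools import product

factors = {
    "outward_arms": ("narrow", "wide"),
    "forward_arms": ("narrow", "wide"),
    "body_lean": ("slight", "strong"),
}

# A full 2^3 crossing yields 8 runs, matching the 8 clips per agent.
for run, levels in enumerate(product(*factors.values()), start=1):
    print(run, dict(zip(factors, levels)))
```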
  3. In general, people tend to identify the emotions of others from their facial expressions; however, recent findings suggest that we may be more accurate when we hear someone's voice than when we look only at their facial expression. The study reported in the paper examined whether these findings hold true for animated agents. A total of 37 subjects participated in the study: 19 males, 14 females, and 4 of non-specified gender. Subjects were asked to view 18 video stimuli; 9 clips featured a male agent and 9 a female agent. Each agent showed 3 different facial expressions (happy, angry, neutral), each paired with 3 different voice lines spoken in 3 different tones (happy, angry, neutral). Hence, in some clips the agent's tone of voice and facial expression were congruent, while in others they were not. Subjects answered questions about the emotion they believed the agent was feeling and rated the emotion's intensity, typicality, and sincerity. Findings showed that the emotion recognition rate and the ratings of intensity, typicality, and sincerity were highest when the agent's face and voice were congruent. However, when the channels were incongruent, subjects identified the emotion more accurately from the agent's facial expression than from the tone of voice. (The face/voice crossing is enumerated in the sketch below.)
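A short sketch enumerating the face/voice crossing described above; the labels are illustrative.

```python
# Illustrative enumeration of the 3 x 3 face/voice crossing per agent:
# 9 clips each, of which 3 are congruent (face matches voice tone).
from itertools import product

emotions = ("happy", "angry", "neutral")
for agent in ("male", "female"):
    for face, voice in product(emotions, emotions):
        tag = "congruent" if face == voice else "incongruent"
        print(f"{agent} agent: face={face}, voice={voice} -> {tag}")
```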
  4. We introduce a novel framework for emotional state detection from facial expressions, targeted at learning environments. Our framework is based on a convolutional deep neural network that classifies the emotions of people captured through a webcam. For the classification outcome we adopt Russell's model of core affect, in which any particular emotion can be placed in one of four quadrants: pleasant-active, pleasant-inactive, unpleasant-active, and unpleasant-inactive. We gathered data from various datasets, which were normalized and used to train the deep learning model. We use the fully connected layers of the VGG_S network, which was trained on manually labeled human facial expressions. We tested the application by splitting the data 80:20 and retraining the model; the overall test accuracy across all detected emotions was 66%. The working application reports the user's emotional state at about five frames per second on a standard laptop computer with a webcam. The emotional state detector will be integrated into an affective pedagogical agent system, where it will serve as feedback to an intelligent animated educational tutor. (A simplified version of the capture-and-classify loop is sketched below.)
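A simplified, hypothetical version of the capture-and-classify pipeline: grab a webcam frame with OpenCV and map it to one of Russell's four quadrants with a small stand-in CNN. The real system is based on VGG_S features and runs continuously at about five frames per second; the network, input size, and preprocessing below are placeholders.

```python
# Simplified stand-in for the capture-and-classify pipeline: one webcam
# frame -> small CNN -> one of Russell's four core-affect quadrants.
# The network, 48x48 grayscale input, and preprocessing are placeholders;
# the actual system is based on VGG_S features trained on labeled faces.
import cv2
import torch
import torch.nn as nn

QUADRANTS = ["pleasant-active", "pleasant-inactive",
             "unpleasant-active", "unpleasant-inactive"]

class TinyAffectNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 12 * 12, len(QUADRANTS))

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyAffectNet().eval()  # trained weights would be loaded here
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face = cv2.resize(gray, (48, 48))  # assumes the face fills the frame
    x = torch.from_numpy(face).float().div(255).view(1, 1, 48, 48)
    with torch.no_grad():
        print("core-affect quadrant:", QUADRANTS[model(x).argmax(1).item()])
cap.release()
```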
  5. Pedagogical agents are animated characters embedded within an e-learning environment to facilitate learning. With the growing understanding of the complex interplay between emotions and cognition, there is a need to design agents that can provide believable simulated emotional interactions with the learner. Best practices from the animation industry could be used to improve the believability of the agents. A well-known best practice holds that, in a long or medium shot, the movements of the limbs, torso, and head play the most important role in conveying the character's emotion, followed by the eyes/face and lip sync, respectively. The researchers' study tested the validity of this best practice using statistical methods. It investigated the contribution of 3 channels (body, i.e., torso/limbs/head; face; and speech) to the expression of 5 emotions (happiness, sadness, anger, fear, surprise) in a stylized agent in a full-body shot. Findings confirm that the biggest contributor to the perceived believability of the animated emotion is the character's body, followed by the face and speech, respectively, for 4 out of 5 emotions.