

Title: Multimodal Affective Pedagogical Agents for Different Types of Learners
The paper reports progress on an NSF-funded project whose goal is to research and develop multimodal affective animated pedagogical agents (APA) for different types of learners. Although the preponderance of research on APA tends to focus on the cognitive aspects of online learning, this project explores the less-studied role of affective features. More specifically, the objectives of the work are to: (1) research and develop novel algorithms for emotion recognition and for life-like emotion representation in embodied agents, which will be integrated into a new system for creating APA to be embedded in digital lessons; and (2) develop an empirically grounded research base that will guide the design of affective APA that are effective for different types of learners. This involves conducting a series of experiments to determine the effects of the agent's emotional style and emotional intelligence on a diverse population of students. The paper outlines the work conducted so far, including the development of a new system (and underlying algorithms) for producing affective APA, and reports the findings from two preliminary studies.
Award ID(s):
1821894
NSF-PAR ID:
10276226
Author(s) / Creator(s):
Date Published:
Journal Name:
Russo D., Ahram T., Karwowski W., Di Bucchianico G., Taiar R. (eds) Intelligent Human Systems Integration 2021. IHSI 2021. Advances in Intelligent Systems and Computing
Volume:
1322
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The overall goal of our research is to develop a system of intelligent multimodal affective pedagogical agents that are effective for different types of learners (Adamo et al., 2021). While most of the research on pedagogical agents tends to focus on the cognitive aspects of online learning and instruction, this project explores the less-studied role of affective (or emotional) factors. We aim to design believable animated agents that can convey realistic, natural emotions through speech, facial expressions, and body gestures, and that can react to students' detected emotional states with emotional intelligence. Within the context of this goal, the specific objective of the work reported in the paper was to examine the extent to which the agents' facial micro-expressions affect students' perception of the agents' emotions and their naturalness. Micro-expressions are very brief facial expressions that occur when a person either deliberately or unconsciously conceals an emotion being felt (Ekman & Friesen, 1969). Our assumption is that if the animated agents display facial micro-expressions in addition to macro-expressions, they will convey higher expressive richness and naturalness to the viewer, as "the agents can possess two emotional streams, one based on interaction with the viewer and the other based on their own internal state, or situation" (Queiroz et al., 2014, p. 2).

The work reported in the paper involved two studies with human subjects. The objectives of the first study were to examine whether people can recognize micro-expressions (in isolation) in animated agents, and whether there are differences in recognition based on the agent's visual style (e.g., stylized versus realistic). The objectives of the second study were to investigate whether people can recognize the animated agents' micro-expressions when integrated with macro-expressions; the extent to which the presence of macro- + micro-expressions affects the perceived expressivity and naturalness of the animated agents; the extent to which exaggerating the micro-expressions (e.g., increasing the amplitude of the animated facial displacements) affects emotion recognition and perceived agent naturalness and emotional expressivity; and whether there are differences based on the agent's design characteristics.

In the first study, 15 participants watched eight micro-expression animations representing four different emotions (happiness, sadness, fear, surprise). Four animations featured a stylized agent and four a realistic agent. For each animation, subjects were asked to identify the agent's emotion conveyed by the micro-expression. In the second study, 234 participants watched three sets of eight animation clips (24 clips in total, 12 clips per agent). Four animations for each agent featured the character performing macro-expressions only, four featured the character performing macro- + micro-expressions without exaggeration, and four featured the agent performing macro- + micro-expressions with exaggeration. Participants were asked to recognize the true emotion of the agent and to rate the emotional expressivity and naturalness of the agent in each clip using a 5-point Likert scale. We have collected all the data and completed the statistical analysis. Findings and discussion, implications for research and practice, and suggestions for future work will be reported in the full paper.

References
Adamo, N., Benes, B., Mayer, R., Lei, X., Meyer, Z., & Lawson, A. (2021). Multimodal Affective Pedagogical Agents for Different Types of Learners. In: Russo, D., Ahram, T., Karwowski, W., Di Bucchianico, G., Taiar, R. (eds) Intelligent Human Systems Integration 2021. IHSI 2021. Advances in Intelligent Systems and Computing, 1322. Springer, Cham. https://doi.org/10.1007/978-3-030-68017-6_33
Ekman, P., & Friesen, W. V. (1969, February). Nonverbal leakage and clues to deception. Psychiatry, 32(1), 88–106. https://doi.org/10.1080/00332747.1969.11023575
Queiroz, R. B., Musse, S. R., & Badler, N. I. (2014). Investigating Macroexpressions and Microexpressions in Computer Graphics Animated Faces. Presence, 23(2), 191-208. http://dx.doi.org/10.1162/
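The abstract above does not specify which statistical procedures were used, so the snippet below is only an illustrative sketch of one conventional way such a within-subjects design (Study 2) could be analyzed: a chi-square test on recognition accuracy across the three expression conditions and a Friedman test on the 5-point naturalness ratings. All numbers are simulated placeholders, not the study's data or results.

```python
# Illustrative analysis sketch for a within-subjects design like Study 2.
# The data here are simulated placeholders; they are NOT the study's results.
import numpy as np
from scipy.stats import chi2_contingency, friedmanchisquare

rng = np.random.default_rng(0)
n = 234  # participants, as described in the abstract

# Emotion-recognition correctness per condition (1 = correct), simulated.
macro_only  = rng.binomial(1, 0.70, n)
macro_micro = rng.binomial(1, 0.78, n)
exaggerated = rng.binomial(1, 0.82, n)

# Chi-square test on correct/incorrect counts across the three conditions.
table = [[c.sum(), n - c.sum()] for c in (macro_only, macro_micro, exaggerated)]
chi2, p_rec, dof, _ = chi2_contingency(table)
print(f"recognition: chi2={chi2:.2f}, p={p_rec:.3f}")

# Friedman test on 5-point naturalness ratings (same subjects, three conditions).
ratings = rng.integers(1, 6, size=(3, n))  # placeholder Likert responses
stat, p_nat = friedmanchisquare(*ratings)
print(f"naturalness: Friedman chi2={stat:.2f}, p={p_nat:.3f}")
```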

     
  2. The paper reports ongoing research toward the design of multimodal affective pedagogical agents that are effective for different types of learners and applications. In particular, the work reported in the paper investigated the extent to which the type of character design (realistic versus stylized) affects students' perception of an animated agent's facial emotions, and whether the effects are moderated by learner characteristics (e.g., gender). Eighty-two participants viewed 10 animation clips featuring a stylized character exhibiting five different emotions (happiness, sadness, fear, surprise, and anger; 2 clips per emotion) and 10 clips featuring a realistic character portraying the same emotional states. The participants were asked to name the emotions and rate their sincerity, intensity, and typicality. The results indicated that, for recognition, participants were slightly more likely to recognize the emotions displayed by the stylized agent, although the difference was not statistically significant. The stylized agent was on average rated significantly higher for facial emotion intensity, whereas the differences in ratings for typicality and sincerity across all emotions were not statistically significant. At the level of individual emotions, the stylized agent was rated significantly higher for sadness (within typicality), happiness (within sincerity), and fear, anger, sadness, and happiness (within intensity). Gender was not a significant correlate across all emotions or for individual emotions.
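This summary does not state which tests produced these results; as a hedged illustration only, the sketch below shows one standard way a paired stylized-versus-realistic intensity comparison could be run, using a Wilcoxon signed-rank test on per-participant mean ratings. The values are simulated, not the study's data, and the test choice is an assumption.

```python
# Minimal sketch of a paired stylized-vs-realistic intensity comparison.
# Simulated ratings only; not the study's data or its actual test choice.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n = 82  # participants, as described in the abstract

# Each participant's mean intensity rating over the 10 clips per character style.
stylized  = rng.normal(3.9, 0.6, n).clip(1, 5)
realistic = rng.normal(3.5, 0.6, n).clip(1, 5)

# Wilcoxon signed-rank test for paired, ordinal-scale ratings.
stat, p = wilcoxon(stylized, realistic)
print(f"intensity (stylized vs realistic): W={stat:.1f}, p={p:.4f}")
```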
  3. Endowing automated agents with the ability to provide support, entertainment and interaction with human beings requires sensing of the users' affective state. These affective states are impacted by a combination of emotion inducers, current psychological state, and various conversational factors. Although emotion classification in both singular and dyadic settings is an established area, the effects of these additional factors on the production and perception of emotion are understudied. This paper presents a new dataset, Multimodal Stressed Emotion (MuSE), to study the multimodal interplay between the presence of stress and expressions of affect. We describe the data collection protocol, the possible areas of use, and the annotations for the emotional content of the recordings. The paper also presents several baselines to measure the performance of multimodal features for emotion and stress classification.
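MuSE's actual features, labels, and evaluation protocol are not given in this summary; as a rough sketch only, the snippet below shows the kind of feature-level fusion baseline that multimodal emotion and stress datasets are commonly benchmarked with, using randomly generated stand-ins for acoustic and lexical features.

```python
# Hedged sketch of a simple feature-fusion baseline for binary stress
# classification. Feature names, dimensions, and labels are illustrative
# stand-ins, not MuSE's actual feature set or protocol.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_clips = 500  # placeholder corpus size

# Placeholder per-clip features for two modalities.
acoustic = rng.normal(size=(n_clips, 88))   # e.g., an eGeMAPS-sized acoustic descriptor
lexical  = rng.normal(size=(n_clips, 300))  # e.g., averaged word embeddings
labels   = rng.integers(0, 2, n_clips)      # binary stressed / non-stressed labels

# Early fusion by feature concatenation, followed by a linear classifier.
features = np.hstack([acoustic, lexical])
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, labels, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```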
  4. One of the grand challenges of artificial intelligence and affective computing is for technology to become emotionally aware and thus more human-like. Modeling human emotions is particularly complicated when we consider the lived experiences of people who are on the autism spectrum. To understand the emotional experiences of autistic adults and their attitudes towards common representations of emotions, we deployed a context study as the first phase of a Grounded Design research project. Based on community observations and interviews, this work contributes empirical evidence of how the emotional experiences of autistic adults are entangled with social interactions as well as the processing of sensory inputs. We learned that (1) the emotional experiences of autistic adults are embodied and co-constructed within the context of physical environments, social relationships, and technology use, and (2) conventional approaches to visually representing emotion in affective education and computing systems fail to accurately represent the experiences and perceptions of autistic adults. We contribute a social-emotional-sensory design map to guide designers in creating more diverse and nuanced affective computing interfaces that are enriched by accounting for neurodivergent users.
  5. The HSI (Hispanic Serving Institution) ATE (Advanced Technological Education) Hub 2 is a three-year collaborative research project funded by the National Science Foundation (NSF) that continues the partnership between two successful programs and involves a third partner in piloting professional development that draws upon findings from the initial program. The goal of HSI ATE Hub 2 is to improve outcomes for Latinx students in technician education programs through design, development, pilot delivery, and dissemination of a 3-tier professional development (PD) model for culturally responsive technician education at 2-year Hispanic Serving Institutions (HSIs). The project seeks to do this by developing the awareness and ability of faculty to appreciate, engage, and affirm the unique cultural identities of the students in their classes and to use this connection to deepen students' belonging and emerging identities as STEM learners and future STEM technicians. This paper shares the research foundations shaping this approach and the methods by which faculty professional development is being provided to develop this important and sensitive instructional capability in participating faculty.

The tiered PD model features a scaffolded series of reflective and activity-oriented modules that incrementally enrich the instructional practices and mindset of HSI STEM educators and strengthen their repertoire of strategies for engaging culturally diverse students. Scaffolding that translates culturally responsive theory into practice spans each of the four topic modules in a tier, and each topic module then scaffolds to a more advanced topic module in the next tier. Tier 1, Bienvenidos, welcomes HSI STEM educators who recognize the need to better serve their Latinx students and want guidance on small, practical activities to try with their students. Tier 2, Transformation through Action, immerses HSI STEM educators in additional activities that bring culturally responsive practices into their technician training while building capacity to collect evidence about impacts and outcomes for students. Tier 3, Engaging Community, strengthens leadership as HSI STEM educators disseminate results from activities completed in Tiers 1 and 2 at conferences that attract technician educators. Sharing these evidence-based practices and their outcomes contributes to broader impacts in the Advanced Technological Education (ATE) community of NSF grantees.

Westchester Community College (WCC), the first 2-year HSI in the 64-campus State University of New York (SUNY) system, is piloting the 3-tier PD model using virtual learning methods mastered through previous NSF ATE work and the COVID-19 context. During the pilot, over 20 WCC technician educators in three cohorts will develop leadership skills and practice culturally responsive methods. The pilot will build capacity within WCC STEM technician programs to better support the diversity of students, industry demand for a diverse workforce, and WCC's capacity for future development of technician education programs. This first paper in a three-part series describes the program goals and objectives and the 3-Tier PD model, and reports initial results for Cohort A's engagement in the first three modules of Tier 1.