

Title: Happy = Human: A Feeling of Belonging Modulates the “Expression-to-Mind” Effect
Past research has demonstrated a link between facial expressions and mind perception, yet why expressions, especially happy expressions, influence mind attribution remains unclear. Conducting four studies, we addressed this issue. In Study 1, we investigated whether the valence or behavioral intention (i.e., approach or avoidance) implied by different emotions affected the minds ascribed to expressers. Happy (positive valence and approach intention) targets were ascribed more sophisticated minds than were targets displaying neutral, angry (negative-approach), or fearful (negative-avoidance) expressions, suggesting emotional valence was relevant to mind attribution but apparent behavioral intentions were not. We replicated this effect using both Black and White targets (Study 2) and another face database (Study 3). In Study 4, we conducted path analyses to examine attractiveness and expectations of social acceptance as potential mediators of the effect. Our findings suggest that signals of social acceptance are crucial to the effect emotional expressions have on mind perception.
Award ID(s): 1748461
NSF-PAR ID: 10396090
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Social Cognition
Volume: 40
Issue: 3
ISSN: 0278-016X
Page Range / eLocation ID: 213 to 227
Format(s): Medium: X
Sponsoring Org: National Science Foundation
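For readers who want to see the shape of the analysis, the following is a minimal sketch of a regression-based path (mediation) analysis of the kind described for Study 4, written in Python with statsmodels. The dataset, effect sizes, and variable names (happy, acceptance, attractive, mind) are simulated placeholders, not the authors' data or exact model.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in data: a binary expression condition, two candidate
    # mediators, and a mind-attribution outcome.
    rng = np.random.default_rng(0)
    n = 200
    happy = rng.integers(0, 2, n)                      # 1 = happy target, 0 = neutral
    acceptance = 0.5 * happy + rng.normal(size=n)      # expected social acceptance
    attractive = 0.3 * happy + rng.normal(size=n)      # perceived attractiveness
    mind = 0.6 * acceptance + 0.1 * attractive + rng.normal(size=n)
    df = pd.DataFrame(dict(happy=happy, acceptance=acceptance,
                           attractive=attractive, mind=mind))

    # Path a: expression condition -> each mediator
    a_acc = smf.ols("acceptance ~ happy", df).fit().params["happy"]
    a_att = smf.ols("attractive ~ happy", df).fit().params["happy"]

    # Paths b and c': mediators and condition -> mind attribution
    full = smf.ols("mind ~ happy + acceptance + attractive", df).fit()
    b_acc, b_att = full.params["acceptance"], full.params["attractive"]

    print("indirect effect via acceptance:", a_acc * b_acc)
    print("indirect effect via attractiveness:", a_att * b_att)
    print("direct effect (c'):", full.params["happy"])

In a real analysis the indirect effects would typically be tested with bootstrapped confidence intervals rather than point estimates alone.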
More Like this
  1. Inferring emotions from others' non-verbal behavior is a pervasive and fundamental task in social interactions. Typically, real-life encounters imply the co-location of interactants, i.e., their embodiment within a shared spatial-temporal continuum in which the trajectories of the interaction partner's Expressive Body Movement (EBM) create mutual social affordances. Shared Virtual Environments (SVEs) and Virtual Characters (VCs) are increasingly used to study social perception, allowing researchers to reconcile experimental stimulus control with ecological validity. However, it remains unclear whether display modalities that enable co-presence affect observers' responses to VCs' expressive behaviors. Drawing upon ecological approaches to social perception, we reasoned that sharing the space with a VC should amplify affordances compared with a screen display, and consequently alter observers' perceptions of EBM in terms of judgment certainty, hit rates, perceived expressive qualities (arousal and valence), and resulting approach and avoidance tendencies. In a between-subjects design, we compared the perception of 54 ten-second animations of VCs performing three daily activities (painting, mopping, sanding) in three emotional states (angry, happy, sad), displayed either in 3D as a co-located VC moving in shared space or as a 2D replay on a screen that was also placed in the SVEs. Results confirm effective experimental control of the variable of interest: perceived co-presence was significantly affected by the display modality, while perceived realism and immersion showed no difference; spatial presence and social presence showed marginal effects. Results suggest that the display modality had a minimal effect on emotion perception. A weak effect was found for the expression "happy," for which unbiased hit rates were higher in the 3D condition. Importantly, low hit rates were observed for all three emotion categories. However, observers' judgments correlated significantly for category assignment and across all rating dimensions, indicating universal decoding principles. Although category assignment was often erroneous, ratings of valence and arousal were consistent with expectations derived from emotion theory. The study demonstrates the value of animated VCs in emotion perception studies and raises new questions regarding the validity of category-based emotion recognition measures.
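As a side note on the "unbiased hit rates" mentioned above, the sketch below shows how such a rate (in the sense commonly attributed to Wagner, 1993) can be computed from a single observer's confusion matrix; the matrix values are invented for illustration and are not data from this study.

    import numpy as np

    emotions = ["angry", "happy", "sad"]
    # Rows: emotion presented by the VC; columns: emotion chosen by the observer.
    confusion = np.array([
        [10,  5,  3],   # angry stimuli
        [ 4, 12,  2],   # happy stimuli
        [ 6,  4,  8],   # sad stimuli
    ])

    stimuli_per_category = confusion.sum(axis=1)   # how often each emotion was shown
    uses_per_category = confusion.sum(axis=0)      # how often each emotion was chosen
    hits = np.diag(confusion)

    # Unbiased hit rate: hits squared divided by (stimuli shown * times chosen),
    # which penalizes observers who overuse a response category.
    hu = hits.astype(float) ** 2 / (stimuli_per_category * uses_per_category)
    for emotion, value in zip(emotions, hu):
        print(f"{emotion}: Hu = {value:.2f}")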

     
  2. The expression of human emotion is integral to social interaction, and in virtual reality it is increasingly common to develop virtual avatars that attempt to convey emotions by mimicking these visual and aural cues, i.e., facial and vocal expressions. However, errors in (or the absence of) facial tracking can result in the rendering of incorrect facial expressions on these virtual avatars. For example, a virtual avatar may speak with a happy or unhappy vocal inflection while its facial expression remains otherwise neutral. When an avatar's facial and vocal expressions conflict, users may incorrectly interpret the avatar's emotion, which can have unintended consequences for social influence or for the outcome of the interaction. In this paper, we present a human-subjects study (N = 22) aimed at understanding the impact of conflicting facial and vocal emotional expressions. Specifically, we explored three levels of emotional valence (unhappy, neutral, and happy) expressed in both visual (facial) and aural (vocal) forms. We also investigated three levels of head scale (down-scaled, accurate, and up-scaled) to evaluate whether head scale affects users' interpretation of the conveyed emotion. We found significant effects of different multimodal expressions on happiness and trust perception, while no significant effect was observed for head scale. Our results suggest that facial expressions have a stronger impact than vocal expressions. Additionally, as the difference between the two expressions increases, the multimodal expression becomes less predictable. For example, for the happy-looking and happy-sounding multimodal expression, we expected and observed high happiness ratings and high trust; however, if either expression changes, the mismatch makes the overall expression less predictable. We discuss the relationships, implications, and guidelines for social applications that aim to leverage multimodal social cues.
  3. The overall goal of our research is to develop a system of intelligent multimodal affective pedagogical agents that are effective for different types of learners (Adamo et al., 2021). While most research on pedagogical agents tends to focus on the cognitive aspects of online learning and instruction, this project explores the less-studied role of affective (or emotional) factors. We aim to design believable animated agents that can convey realistic, natural emotions through speech, facial expressions, and body gestures, and that can react to students' detected emotional states with emotional intelligence. Within the context of this goal, the specific objective of the work reported in this paper was to examine the extent to which the agents' facial micro-expressions affect students' perception of the agents' emotions and their naturalness. Micro-expressions are very brief facial expressions that occur when a person either deliberately or unconsciously conceals an emotion being felt (Ekman & Friesen, 1969). Our assumption is that if the animated agents display facial micro-expressions in addition to macro-expressions, they will convey greater expressive richness and naturalness to the viewer, as "the agents can possess two emotional streams, one based on interaction with the viewer and the other based on their own internal state, or situation" (Queiroz et al., 2014, p. 2). The work reported in the paper involved two studies with human subjects. The objectives of the first study were to examine whether people can recognize micro-expressions (in isolation) in animated agents, and whether recognition differs with the agent's visual style (e.g., stylized versus realistic). The objectives of the second study were to investigate (1) whether people can recognize the animated agents' micro-expressions when integrated with macro-expressions; (2) the extent to which the presence of micro- plus macro-expressions affects the perceived expressivity and naturalness of the animated agents; (3) the extent to which exaggerating the micro-expressions (e.g., increasing the amplitude of the animated facial displacements) affects emotion recognition and perceived agent naturalness and emotional expressivity; and (4) whether there are differences based on the agent's design characteristics. In the first study, 15 participants watched eight micro-expression animations representing four different emotions (happy, sad, fear, surprised). Four animations featured a stylized agent and four a realistic agent. For each animation, subjects were asked to identify the agent's emotion conveyed by the micro-expression. In the second study, 234 participants watched three sets of eight animation clips (24 clips in total, 12 clips per agent). For each agent, four animations featured the character performing macro-expressions only, four featured macro- plus micro-expressions without exaggeration, and four featured macro- plus micro-expressions with exaggeration. Participants were asked to recognize the true emotion of the agent and to rate the agent's emotional expressivity and naturalness in each clip using a 5-point Likert scale. We have collected all the data and completed the statistical analysis. Findings and discussion, implications for research and practice, and suggestions for future work will be reported in the full paper.
References
Adamo, N., Benes, B., Mayer, R., Lei, X., Meyer, Z., & Lawson, A. (2021). Multimodal Affective Pedagogical Agents for Different Types of Learners. In: Russo, D., Ahram, T., Karwowski, W., Di Bucchianico, G., & Taiar, R. (eds.), Intelligent Human Systems Integration 2021 (IHSI 2021). Advances in Intelligent Systems and Computing, 1322. Springer, Cham. https://doi.org/10.1007/978-3-030-68017-6_33
Ekman, P., & Friesen, W. V. (1969, February). Nonverbal leakage and clues to deception. Psychiatry, 32(1), 88–106. https://doi.org/10.1080/00332747.1969.11023575
Queiroz, R. B., Musse, S. R., & Badler, N. I. (2014). Investigating Macroexpressions and Microexpressions in Computer Graphics Animated Faces. Presence, 23(2), 191–208. http://dx.doi.org/10.1162/

     
  4. Background: The physical and emotional well-being of women is critical for healthy pregnancy and birth outcomes. The Two Happy Hearts intervention is a personalized mind-body program coached by community health workers that includes monitoring and reflecting on personal health, as well as practicing stress management strategies such as mindful breathing and movement. Objective: The aims of this study are to (1) test the daily use of a wearable device to objectively measure physical and emotional well-being, along with subjective assessments, during pregnancy, and (2) explore the user's engagement with the Two Happy Hearts intervention prototype and understand their experiences with the various intervention components. Methods: A case study with a mixed design was used. We recruited a 29-year-old woman at 33 weeks of gestation with a singleton pregnancy. She had no medical complications or physical restrictions, and she was enrolled in the Medi-Cal public health insurance plan. The participant engaged with the Two Happy Hearts intervention prototype from her third trimester until delivery. The Oura smart ring was used to continuously monitor objective physical and emotional states, such as resting heart rate, resting heart rate variability, sleep, and physical activity. In addition, the participant self-reported her physical and emotional health using the Two Happy Hearts mobile app-based 24-hour recall surveys (sleep quality and level of physical activity) and ecological momentary assessment (positive and negative emotions), as well as the Perceived Stress Scale, the Center for Epidemiologic Studies Depression Scale, and the State-Trait Anxiety Inventory. Engagement with the Two Happy Hearts intervention was recorded via both the smart ring and the phone app, and user experiences were collected via Research Electronic Data Capture satisfaction surveys. Objective data from the Oura ring and subjective data on physical and emotional health were described. Regression plots and Pearson correlations between the objective and subjective data were presented, and content analysis was performed for the qualitative data. Results: Decreased resting heart rate was significantly correlated with increased heart rate variability (r=–0.92, P<.001). We found significant associations between self-reported responses and Oura ring measures: (1) positive emotions and heart rate variability (r=0.54, P<.001), (2) sleep quality and sleep score (r=0.52, P<.001), and (3) physical activity and step count (r=0.77, P<.001). In addition, deep sleep appeared to increase as light and rapid eye movement sleep decreased. The psychological measures of stress, depression, and anxiety appeared to decrease from baseline to post-intervention. Furthermore, the participant had a high completion rate across the components of the Two Happy Hearts intervention prototype and shared several positive experiences, such as increased self-efficacy and a normal delivery. Conclusions: The Two Happy Hearts intervention prototype shows promise for use by underserved pregnant women.
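To make the reported objective-subjective comparisons concrete, here is a minimal sketch of the kind of Pearson correlation described above, run on simulated daily values; the variable names and numbers are illustrative stand-ins for the Oura ring measures and app-based self-reports, not the participant's data.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    days = 40
    hrv = rng.normal(60, 10, days)                            # nightly heart rate variability (ms)
    positive_affect = 0.05 * hrv + rng.normal(0, 0.6, days)   # daily self-reported positive emotion

    r, p = pearsonr(hrv, positive_affect)
    print(f"r = {r:.2f}, p = {p:.3f}")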
  5. Abstract: Research Highlights

    When situational contexts were not considered, Chinese preschoolers displayed lower overall positive and negative expressions than their US counterparts.

    Chinese preschoolers displayed levels of emotion expression similar to those of their US counterparts during an achievement-related challenge salient to their socio-cultural environment.

    Chinese preschoolers are particularly responsive to achievement‐related challenges, relative to other emotion‐challenging situations that are less culturally salient.

    No cortisol increase was observed in any of the emotion‐challenging paradigms among US preschoolers.

    Children's emotion expression and biological reactivity may be most responsive to challenges relevant to their socio‐cultural environments.

     