
Title: Attention Dynamics During Emotion Recognition by Deaf and Hearing Individuals
Abstract: The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were recognized more accurately and more quickly than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than of negative faces; this pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, proved useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.
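The ambient–focal distinction referenced in the abstract is often quantified with the coefficient K of Krejtz et al. (2016): each fixation's z-scored duration minus the z-scored amplitude of the following saccade, averaged over a trial, with K > 0 indicating focal and K < 0 ambient viewing. The Python sketch below illustrates that general technique only; the function name and toy values are hypothetical, and this is not necessarily the exact pipeline used in the study.

```python
import numpy as np

def coefficient_k(fix_durations_ms, saccade_amplitudes_deg):
    """Mean ambient-focal coefficient K over one trial.

    K_i = z(duration of fixation i) - z(amplitude of saccade i -> i+1);
    K > 0 suggests focal viewing, K < 0 ambient viewing.
    """
    d = np.asarray(fix_durations_ms[:-1], dtype=float)  # fixations that have a following saccade
    a = np.asarray(saccade_amplitudes_deg, dtype=float)
    k = (d - d.mean()) / d.std(ddof=1) - (a - a.mean()) / a.std(ddof=1)
    return k.mean()

# Toy trial: fixations lengthen while saccades shorten, i.e. a shift toward focal viewing.
durations = [180, 220, 450, 520, 610]        # ms, hypothetical values
amplitudes = [6.1, 4.8, 1.2, 0.9]            # degrees, one per fixation-to-fixation transition
print(coefficient_k(durations, amplitudes))  # positive -> focal tendency
```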
Authors:
Award ID(s):
1748380
Publication Date:
NSF-PAR ID:
10182902
Journal Name:
The Journal of Deaf Studies and Deaf Education
ISSN:
1081-4159
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper demonstrates the utility of ambient-focal attention and pupil dilation dynamics to describe visual processing of emotional facial expressions. Pupil dilation and focal eye movements reflect deeper cognitive processing and thus shed more light on the dynamics of emotional expression recognition. Socially anxious individuals (N = 24) and non-anxious controls (N = 24) were asked to recognize emotional facial expressions that gradually morphed from a neutral expression to one of happiness, sadness, or anger in 10-s animations. Anxious participants exhibited more ambient face scanning than their non-anxious counterparts. We observed a positive relationship between focal fixations and pupil dilation, indicating deeper processing of viewed faces, but only by non-anxious participants, and only during the last phase of emotion recognition. Group differences in the dynamics of ambient-focal attention support the hypothesis of vigilance to emotional expression processing by socially anxious individuals. We discuss the results by referring to current literature on cognitive psychopathology.
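As a minimal sketch of the reported focal-fixation/pupil relationship, per-trial coefficient-K values could be correlated with baseline-corrected pupil dilation from the final recognition phase. The numbers below are illustrative placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-trial measures from the last phase of recognition:
# coefficient K (focal viewing > 0) and baseline-corrected pupil
# dilation in mm; values are illustrative, not data from the study.
k_values = np.array([0.31, 0.12, -0.05, 0.44, 0.27, 0.08, 0.39, 0.21])
pupil_mm = np.array([0.22, 0.15, 0.09, 0.30, 0.24, 0.11, 0.27, 0.18])

r, p = pearsonr(k_values, pupil_mm)
print(f"r = {r:.2f}, p = {p:.3f}")  # a positive r mirrors the reported effect
```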
  2. The overall goal of our research is to develop a system of intelligent multimodal affective pedagogical agents that are effective for different types of learners (Adamo et al., 2021). While most of the research on pedagogical agents tends to focus on the cognitive aspects of online learning and instruction, this project explores the less-studied role of affective (or emotional) factors. We aim to design believable animated agents that can convey realistic, natural emotions through speech, facial expressions, and body gestures and that can react to students' detected emotional states with emotional intelligence. Within the context of this goal, the specific objective of the work reported in the paper was to examine the extent to which the agents' facial micro-expressions affect students' perception of the agents' emotions and their naturalness. Micro-expressions are very brief facial expressions that occur when a person either deliberately or unconsciously conceals an emotion being felt (Ekman & Friesen, 1969). Our assumption is that if the animated agents display facial micro-expressions in addition to macro-expressions, they will convey higher expressive richness and naturalness to the viewer, as "the agents can possess two emotional streams, one based on interaction with the viewer and the other based on their own internal state, or situation" (Queiroz et al., 2014, p. 2). The work reported in the paper involved two studies with human subjects. The first study examined whether people can recognize micro-expressions (in isolation) in animated agents, and whether recognition differs with the agent's visual style (e.g., stylized versus realistic). The second study investigated whether people can recognize the animated agents' micro-expressions when integrated with macro-expressions; the extent to which the presence of micro- plus macro-expressions affects the perceived expressivity and naturalness of the animated agents; the extent to which exaggerating the micro-expressions (i.e., increasing the amplitude of the animated facial displacements) affects emotion recognition and perceived agent naturalness and emotional expressivity; and whether there are differences based on the agent's design characteristics. In the first study, 15 participants watched eight micro-expression animations representing four different emotions (happy, sad, fear, surprised). Four animations featured a stylized agent and four a realistic agent. For each animation, subjects were asked to identify the agent's emotion conveyed by the micro-expression. In the second study, 234 participants watched three sets of eight animation clips (24 clips in total, 12 clips per agent). For each agent, four animations featured the character performing macro-expressions only, four featured macro- plus micro-expressions without exaggeration, and four featured macro- plus micro-expressions with exaggeration. Participants were asked to recognize the true emotion of the agent and rate the emotional expressivity and naturalness of the agent in each clip using a 5-point Likert scale. We have collected all the data and completed the statistical analysis; findings, discussion, implications for research and practice, and suggestions for future work will be reported in the full paper. A sketch of how such responses might be tallied appears after the reference list below.

References:
Adamo, N., Benes, B., Mayer, R., Lei, X., Meyer, Z., & Lawson, A. (2021). Multimodal Affective Pedagogical Agents for Different Types of Learners. In: Russo, D., Ahram, T., Karwowski, W., Di Bucchianico, G., & Taiar, R. (Eds.), Intelligent Human Systems Integration 2021 (IHSI 2021), Advances in Intelligent Systems and Computing, 1322. Springer, Cham. https://doi.org/10.1007/978-3-030-68017-6_33
Ekman, P., & Friesen, W. V. (1969, February). Nonverbal leakage and clues to deception. Psychiatry, 32(1), 88–106. https://doi.org/10.1080/00332747.1969.11023575
Queiroz, R. B., Musse, S. R., & Badler, N. I. (2014). Investigating Macroexpressions and Microexpressions in Computer Graphics Animated Faces. Presence, 23(2), 191–208. http://dx.doi.org/10.1162/

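A minimal sketch of how per-condition recognition accuracy and Likert ratings from the second study might be tallied, assuming a tidy response table; the column names and values are hypothetical, not the study's actual materials.

```python
import pandas as pd

# Hypothetical response records, one row per participant x clip; the
# column names and values are assumptions, not the study's materials.
df = pd.DataFrame({
    "condition": ["macro", "macro", "macro+micro", "macro+micro",
                  "macro+micro exaggerated", "macro+micro exaggerated"],
    "true_emotion": ["happy", "sad", "happy", "sad", "happy", "sad"],
    "response":     ["happy", "fear", "happy", "sad", "happy", "sad"],
    "naturalness":  [4, 3, 5, 4, 3, 2],       # 5-point Likert rating
})

# Recognition accuracy and mean naturalness, summarized by condition.
df["correct"] = df["true_emotion"] == df["response"]
print(df.groupby("condition").agg(accuracy=("correct", "mean"),
                                  mean_naturalness=("naturalness", "mean")))
```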
  3. The paper reports ongoing research toward the design of multimodal affective pedagogical agents that are effective for different types of learners and applications. In particular, the work reported in the paper investigated the extent to which the type of character design (realistic versus stylized) affects students' perception of an animated agent's facial emotions, and whether the effects are moderated by learner characteristics (e.g., gender). Eighty-two participants viewed 10 animation clips featuring a stylized character exhibiting five emotions (happiness, sadness, fear, surprise, and anger; 2 clips per emotion) and 10 clips featuring a realistic character portraying the same emotional states. The participants were asked to name the emotions and rate their sincerity, intensity, and typicality. The results indicated that participants were slightly more likely to recognize the emotions displayed by the stylized agent, although the difference was not statistically significant. The stylized agent was on average rated significantly higher for facial emotion intensity, whereas the differences in ratings for typicality and sincerity across all emotions were not statistically significant. Significant differences favoring the stylized agent were found for sadness (typicality), happiness (sincerity), and fear, anger, sadness, and happiness (intensity). Gender was not a significant correlate across all emotions or for individual emotions.
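Because the same 82 participants rated both agents, a paired comparison of the Likert ratings (e.g., a Wilcoxon signed-rank test on intensity) would be one defensible analysis. The sketch below uses hypothetical ratings and is not the paper's reported procedure.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired 5-point intensity ratings from the same raters
# for the stylized and the realistic agent (values are illustrative).
stylized  = np.array([4, 5, 4, 3, 5, 4, 4, 5, 3, 4])
realistic = np.array([3, 4, 4, 2, 4, 3, 4, 4, 3, 3])

stat, p = wilcoxon(stylized, realistic)  # paired, non-parametric test
print(f"W = {stat}, p = {p:.3f}")        # small p would echo the intensity result
```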
  4. In the 21st century, it is of utmost importance for educators and learners to be mindful of the factors that govern one's mental state. Many studies have shown that a professional's success depends strongly on their emotion-management skills: the ability to manage themselves and their responsibilities in a demanding environment. Emotionally intelligent professionals are also able to handle challenging situations involving other people. Many industries, research establishments, and universities that hire graduate students now conduct specialized training to enhance their employees' soft skills, mainly interpersonal skills, so that they can perform at their highest potential. One can maximize the gain from soft-skills training by being well aware of the state of human psychology as framed by emotional intelligence and positive intelligence. Over the last two decades, emotional-intelligence training has been popularized by professional personality-coaching groups, and such trainings are heavily attended by marketing professionals and organization leaders seeking to enhance their capability in the workplace. Emotional intelligence is mainly about being aware of one's mental state and maintaining control of one's actions during various mental states, such as anger, happiness, sadness, and remorse. Aspiring graduate students in science and technology generally lack formal training in understanding human behavior and the traits that can adversely impact their ability to perform and innovate at the highest level. This paper focuses on training graduate students in the popular and practical science of transactional analysis and assessing their competence in utilizing this knowledge to decipher their own and other people's behavior. Transactional analysis was taught to students via the Student Presentation-based Effective Teaching (SPET) methodology. Under this approach, graduate students enrolled in the MECH 500 class were given a set of questions to answer through self-reading of the recommended textbook, "I'm OK - You're OK" by Thomas Harris. Each student answered the assignment questions individually and then worked in a group to prepare a presentation for in-class discussion. Three group discussions were conducted to present different views about the four types of transactions and underlying human traits. Before the transactional analysis training, students were also trained in positive intelligence psychology tools with a similar objective. After the discussion, students were surveyed about the depth of their understanding. Students also reflected on the utility of transactional analysis with respect to positive intelligence. More than 75% of students reported gaining high competency in understanding, defining, and utilizing transactional analysis. This study presents insights for positively impacting graduate students' mindsets as they pursue an unpredictable course of research that can sometimes become very challenging.
  5. Raynal, Ann M.; Ranney, Kenneth I. (Eds.)
    Most research in technologies for the Deaf community has focused on translation using either video or wearable devices. Sensor-augmented gloves have been reported to yield higher gesture recognition rates than camera-based systems; however, they cannot capture information expressed through head and body movement. Gloves are also intrusive and inhibit users in their pursuit of normal daily life, while cameras can raise concerns over privacy and are ineffective in the dark. In contrast, RF sensors are non-contact and non-invasive and do not reveal private information even if hacked. Although RF sensors are unable to measure facial expressions or hand shapes, which would be required for complete translation, this paper aims to exploit near real-time ASL recognition using RF sensors for the design of smart Deaf spaces. In this way, we hope to enable the Deaf community to benefit from advances in technologies that could generate tangible improvements in their quality of life. More specifically, this paper investigates near real-time implementation of machine learning and deep learning architectures for the purpose of sequential ASL signing recognition. We utilize a 60 GHz RF sensor which transmits a frequency-modulated continuous wave (FMCW) waveform. RF sensors can acquire a unique source of information that is inaccessible to optical or wearable devices: namely, a visual representation of the kinematic patterns of motion via the micro-Doppler signature. Micro-Doppler refers to frequency modulations that appear about the central Doppler shift, caused by rotational or vibrational motions that deviate from the principal translational motion. In prior work, we showed that fractal complexity computed from RF data could be used to discriminate signing from daily activities and that RF data could reveal linguistic properties, such as coarticulation. We have also shown that machine learning can be used to discriminate with 99% accuracy the signing of native Deaf ASL users from copysigning (or imitation signing) by hearing individuals. Therefore, imitation signing data is not effective for directly training deep models. However, adversarial learning can be used to transform imitation signing to resemble native signing, or, alternatively, physics-aware generative models can be used to synthesize ASL micro-Doppler signatures for training deep neural networks. With such approaches, we have achieved over 90% recognition accuracy on 20 ASL signs. In natural environments, however, near real-time implementations of classification algorithms are required, as well as an ability to process data streams in a continuous and sequential fashion. In this work, we focus on extensions of our prior work toward this aim, and compare the efficacy of various approaches for embedding deep neural networks (DNNs) on platforms such as a Raspberry Pi or Jetson board. We examine methods for optimizing the size and computational complexity of DNNs for embedded micro-Doppler analysis, methods for network compression, and their resulting sequential ASL recognition performance.
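To make the micro-Doppler concept concrete, the sketch below computes a micro-Doppler signature as a short-time Fourier spectrogram of a synthetic slow-time radar return in which an oscillatory micro-motion rides on a steady central Doppler shift. All parameters are illustrative and unrelated to the paper's 60 GHz sensor configuration.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic slow-time return: a steady body Doppler shift plus an
# oscillatory micro-motion (e.g., a hand moving back and forth at 2 Hz).
fs = 1_000                                         # slow-time sampling rate, Hz (assumed)
t = np.arange(0, 4, 1 / fs)
inst_freq = 60 + 40 * np.sin(2 * np.pi * 2 * t)    # Hz: central shift + micro-Doppler
phase = 2 * np.pi * np.cumsum(inst_freq) / fs      # integrate frequency to phase
x = np.exp(1j * phase)                             # complex baseband radar return

# The micro-Doppler signature is the time-frequency map of this return;
# two-sided output is needed because the input is complex.
f, frames, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192,
                             return_onesided=False)
print(Sxx.shape)  # (frequency bins, time frames) that a classifier would consume
```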