Affective studies provide essential insights to address emotion recognition and tracking. In traditional open-loop structures, a lack of knowledge about the internal emotional state leaves the system unable to adjust stimulus parameters and respond automatically to changes in the brain. To address this issue, we propose using facial electromyogram measurements as biomarkers to infer the internal hidden brain state as feedback to close the loop. In this research, we develop a systematic way to track and control emotional valence, which codes emotions as pleasant or unpleasant. Hence, we conduct a simulation study by modeling and tracking the subject's emotional valence dynamics using state-space approaches. We employ Bayesian filtering to estimate the person-specific model parameters along with the hidden valence state, using continuous and binary features extracted from experimental electromyogram measurements. Moreover, we utilize a mixed-filter estimator to infer the hidden brain state in a real-time simulation environment. We close the loop with a fuzzy logic controller in two categories of regulation: inhibition and excitation. By designing a control action, we aim to automatically reflect any required adjustments within the simulation and reach the desired emotional state levels. The final results demonstrate that, by making use of physiological data, the proposed controller can effectively regulate the estimated valence state. Ultimately, we envision future outcomes of this research supporting alternative forms of self-therapy through wearable machine interface architectures capable of mitigating periods of pervasive emotions and maintaining daily well-being and welfare.
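For readers unfamiliar with the mixed-observation filtering described above, the sketch below shows one common form of such an estimator: a random-walk valence state tracked from one continuous and one binary feature per time step, with a Laplace (Newton) update at each step. The coefficients, noise variances, and synthetic features are illustrative assumptions, not the paper's subject-specific estimates, and the parameter-estimation and fuzzy-control stages are not reproduced here.

```python
import numpy as np

def mixed_filter(r, n, a=0.0, b=1.0, c=0.0, d=1.0,
                 sigma2_v=0.5, sigma2_w=0.05):
    """Approximate Bayesian filter for a random-walk valence state x_k
    observed through a continuous feature r_k ~ N(a + b*x_k, sigma2_v)
    and a binary feature n_k ~ Bernoulli(sigmoid(c + d*x_k)).
    All coefficients here are illustrative, not the paper's estimates."""
    K = len(r)
    x_post = np.zeros(K)
    s_post = np.zeros(K)
    x_prev, s_prev = 0.0, 1.0                 # prior mean and variance
    for k in range(K):
        # one-step prediction under the random-walk state transition
        x_pred, s_pred = x_prev, s_prev + sigma2_w
        # Newton iterations for the posterior mode (Laplace approximation)
        x = x_pred
        for _ in range(20):
            p = 1.0 / (1.0 + np.exp(-(c + d * x)))
            grad = ((x_pred - x) / s_pred
                    + b * (r[k] - a - b * x) / sigma2_v
                    + d * (n[k] - p))
            hess = -1.0 / s_pred - b**2 / sigma2_v - d**2 * p * (1 - p)
            x -= grad / hess
        x_post[k] = x
        s_post[k] = -1.0 / hess               # posterior variance from curvature
        x_prev, s_prev = x_post[k], s_post[k]
    return x_post, s_post

# Synthetic demo: a slowly drifting valence trace observed through
# one noisy continuous feature and one binary feature.
rng = np.random.default_rng(0)
x_true = np.cumsum(rng.normal(0, 0.05, 200))
r = x_true + rng.normal(0, 0.7, 200)
n = rng.binomial(1, 1 / (1 + np.exp(-x_true)))
x_hat, _ = mixed_filter(r, n, sigma2_v=0.49)
```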
Deep Learning-Based Emotion Recognition from Real-Time Videos
We introduce a novel framework for emotional state detection from facial expression targeted to learning environments. Our framework is based on a convolutional deep neural network that classifies people's emotions captured through a web-cam. For our classification outcome we adopt Russell's model of core affect, in which any particular emotion can be placed in one of four quadrants: pleasant-active, pleasant-inactive, unpleasant-active, and unpleasant-inactive. We gathered data from various datasets, which were normalized and used to train the deep learning model. We use the fully-connected layers of the VGG_S network, which was trained on manually labeled human facial expressions. We tested our application by splitting the data 80:20 and re-training the model. The overall test accuracy across all detected emotions was 66%. We have a working application that reports the user's emotional state at about five frames per second on a standard laptop computer with a web-cam. The emotional state detector will be integrated into an affective pedagogical agent system, where it will serve as feedback to an intelligent animated educational tutor.
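To make the real-time pipeline concrete, here is a minimal sketch of the capture-detect-classify loop the abstract describes. It substitutes a torchvision VGG-16 backbone with a re-headed four-way classifier for the original VGG_S weights; the new head is untrained, so its outputs are placeholders until the model is fine-tuned on quadrant-labeled facial expression data. The Haar-cascade face detector and the preprocessing choices are likewise illustrative assumptions.

```python
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

QUADRANTS = ["pleasant-active", "pleasant-inactive",
             "unpleasant-active", "unpleasant-inactive"]

# Stand-in backbone: torchvision's VGG-16 replaces the original VGG_S weights,
# with the last fully-connected layer re-headed for the four quadrants.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
model.classifier[6] = nn.Linear(4096, len(QUADRANTS))
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                      # default web-cam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            logits = model(preprocess(crop).unsqueeze(0))
        label = QUADRANTS[int(logits.argmax())]
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```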
- Award ID(s):
- 1821894
- PAR ID:
- 10276161
- Date Published:
- Journal Name:
- Kurosu M. (eds) Human-Computer Interaction. Multimodal and Natural Interaction. HCII 2020. Lecture Notes in Computer Science
- Volume:
- 12182
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- The International Affective Picture System (IAPS) contains 1,182 well-characterized photographs depicting natural scenes varying in affective content. These pictures are used extensively in affective neuroscience to investigate the neural correlates of emotional processing. Recently, in an effort to augment this dataset, we have begun to generate synthetic emotional images by combining IAPS pictures and diffusion-based AI models. The goal of this study is to compare the neural responses to IAPS pictures and matching AI-generated images. The stimulus set consisted of 60 IAPS pictures (20 pleasant, 20 neutral, 20 unpleasant) and 60 matching AI-generated images (20 pleasant, 20 neutral, 20 unpleasant). In a recording session, a total of 30 IAPS pictures and 30 matching AI-generated images were presented in random order, where each image was displayed for 3 seconds with neighboring images being separated by an interval of 2.8 to 3.5 seconds. Each experiment consisted of 10 recording sessions. The fMRI data was recorded on a 3T Siemens Prisma scanner. Pupil responses to image presentation were monitored using an MRI-compatible eye tracker. Our preliminary analysis of the fMRI data (N=3) showed that IAPS pictures and matching AI-generated images evoked similar neural responses in the visual cortex. In particular, MVPA (Multivariate Pattern Analysis) classifiers built to decode emotional categories from neural responses to IAPS pictures can be used to decode emotional categories from neural responses to AI-generated images and vice versa. Efforts to confirm these findings are underway by recruiting additional participants. Analysis is also being expanded to include the comparison of such measures as functional connectivity and pupillometry.
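As a concrete illustration of the cross-decoding analysis mentioned in this entry, the sketch below trains a linear classifier on patterns evoked by one stimulus set and tests it on the other, and vice versa. The arrays are random placeholders standing in for single-trial voxel patterns; the ROI, the classifier choice (linear SVM), and the preprocessing are assumptions for illustration rather than details reported in the abstract.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder arrays standing in for single-trial voxel patterns from a
# visual-cortex ROI: rows are trials, columns are voxels. Labels code the
# emotional category (0 = pleasant, 1 = neutral, 2 = unpleasant).
rng = np.random.default_rng(1)
X_iaps, y_iaps = rng.normal(size=(300, 500)), rng.integers(0, 3, 300)
X_ai, y_ai = rng.normal(size=(300, 500)), rng.integers(0, 3, 300)

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))

# Train on responses to IAPS pictures, test on responses to AI images ...
clf.fit(X_iaps, y_iaps)
acc_iaps_to_ai = clf.score(X_ai, y_ai)

# ... and the reverse direction.
clf.fit(X_ai, y_ai)
acc_ai_to_iaps = clf.score(X_iaps, y_iaps)

print(f"IAPS -> AI: {acc_iaps_to_ai:.2f}, AI -> IAPS: {acc_ai_to_iaps:.2f}")
```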
- Neurotypical (NT) individuals and individuals with autism spectrum disorder (ASD) make different judgments of social traits from others' faces; they also exhibit different social emotional responses in social interactions. A common hypothesis is that the difference in face perception in ASD compared with NT individuals is related to distinct social behaviors. To test this hypothesis, we combined a face trait judgment task with a novel interpersonal transgression task that induces and measures social emotions and behaviors. ASD and neurotypical participants viewed a large set of naturalistic facial stimuli while judging them on a comprehensive set of social traits (e.g., warm, charismatic, critical). They also completed an interpersonal transgression task in which their responsibility for causing an unpleasant outcome to a social partner was manipulated. The purpose of the latter task was to measure participants' emotional (e.g., guilt) and behavioral (e.g., compensation) responses to interpersonal transgression. We found that, compared with neurotypical participants, ASD participants' self-reported guilt and compensation tendency were less sensitive to our responsibility manipulation. Importantly, ASD participants and neurotypical participants showed distinct associations between self-reported guilt and judgments of criticalness from others' faces. These findings reveal a novel link between perception of social traits and social emotional responses in ASD.
- The incorporation of technology into primary and secondary education has facilitated the creation of curricula that utilize computational tools for problem-solving. In Open-Ended Learning Environments (OELEs), students participate in learning-by-modeling activities that enhance their understanding of science, technology, engineering, and mathematics (STEM) and computational concepts. This research presents an innovative multimodal emotion recognition approach that analyzes facial expressions and speech data to identify pertinent learning-centered emotions, such as engagement, delight, confusion, frustration, and boredom. Utilizing sophisticated machine learning models, including the High-Speed Face Emotion Recognition (HSEmotion) model for visual data and wav2vec 2.0 for auditory data, our method is refined with a modality verification step and a fusion layer for accurate emotion classification. The multimodal technique significantly increases emotion detection accuracy, reaching an overall accuracy of 87% and an F1-score of 84%. The study also correlates these emotions with model-building strategies in collaborative settings, with statistical analyses indicating distinct emotional patterns associated with effective and ineffective strategy use in model construction and debugging tasks. These findings underscore the role of adaptive learning environments in fostering students' emotional and cognitive development.
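The abstract does not spell out the fusion layer, so the following is only a minimal late-fusion sketch: a facial-expression embedding and a speech embedding are concatenated and passed through a small classifier over the five learning-centered emotions. The embedding dimensions (1280 for the visual backbone, 768 for wav2vec 2.0 base), the hidden width, and the omission of the modality verification step are assumptions made for illustration.

```python
import torch
import torch.nn as nn

EMOTIONS = ["engagement", "delight", "confusion", "frustration", "boredom"]

class LateFusionClassifier(nn.Module):
    """Fuses a facial-expression embedding (e.g., from an HSEmotion-style
    backbone) with a speech embedding (e.g., mean-pooled wav2vec 2.0
    features). Embedding sizes and hidden width are illustrative."""
    def __init__(self, face_dim=1280, audio_dim=768, hidden=256):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(face_dim + audio_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, len(EMOTIONS)),
        )

    def forward(self, face_emb, audio_emb):
        # Concatenate the two modality embeddings and classify.
        return self.fusion(torch.cat([face_emb, audio_emb], dim=-1))

# Toy forward pass with random tensors standing in for real features.
model = LateFusionClassifier()
face_emb = torch.randn(4, 1280)    # batch of 4 video-frame embeddings
audio_emb = torch.randn(4, 768)    # matching speech-segment embeddings
probs = model(face_emb, audio_emb).softmax(dim=-1)
```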
- The positivity principle states that people learn better from instructors who display positive emotions rather than negative emotions. In two experiments, students viewed a short video lecture on a statistics topic in which an instructor stood next to a series of slides as she lectured, and then they took either an immediate test (Experiment 1) or a delayed test (Experiment 2). In a between-subjects design, students saw an instructor who used her voice, body movement, gesture, facial expression, and eye gaze to display one of four emotions while lecturing: happy (positive/active), content (positive/passive), frustrated (negative/active), or bored (negative/passive). First, learners were able to recognize the emotional tone of the instructor in an instructional video lecture, particularly by more strongly rating a positive instructor as displaying positive emotions and a negative instructor as displaying negative emotions (in Experiments 1 and 2). Second, concerning building a social connection during learning, learners rated a positive instructor as more likely to facilitate learning, more credible, and more engaging than a negative instructor (in Experiments 1 and 2). Third, concerning cognitive engagement during learning, learners reported paying more attention during learning for a positive instructor than a negative instructor (in Experiments 1 and 2). Finally, concerning learning outcome, learners who had a positive instructor scored higher than learners who had a negative instructor on a delayed posttest (Experiment 2) but not an immediate posttest (Experiment 1). Overall, there is evidence for the positivity principle and the cognitive-affective model of e-learning from which it is derived.