Title: Common Functional Brain States Encode both Perceived Emotion and the Psychophysiological Response to Affective Stimuli
Multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data has critically advanced the neuroanatomical understanding of affect processing in the human brain. Central to these advancements is the brain state, a temporally-succinct fMRI-derived pattern of neural activation, which serves as a processing unit. Establishing the brain state’s central role in affect processing, however, requires that it predicts multiple independent measures of affect. We employed MVPA-based regression to predict the valence and arousal properties of visual stimuli sampled from the International Affective Picture System (IAPS) along with the corollary skin conductance response (SCR) for demographically diverse healthy human participants (n = 19). We found that brain states significantly predicted the normative valence and arousal scores of the stimuli as well as the attendant individual SCRs. In contrast, SCRs significantly predicted arousal only. The prediction effect size of the brain state was more than three times greater than that of SCR. Moreover, neuroanatomical analysis of the regression parameters found remarkable agreement with regions long-established by fMRI univariate analyses in the emotion processing literature. Finally, geometric analysis of these parameters also found that the neuroanatomical encodings of valence and arousal are orthogonal as originally posited by the circumplex model of dimensional emotion.
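The MVPA-based regression and the orthogonality test described above can be illustrated with a minimal sketch. This is synthetic data standing in for fMRI brain states; the dimensions, the ridge penalty, and all variable names are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions only: 100 stimulus presentations ("brain
# states") by 500 voxels; real whole-brain data are far larger.
n_trials, n_voxels = 100, 500

# Synthetic data: each brain state carries a valence signal and an
# arousal signal on two fixed voxel patterns, plus noise.
pattern_val = rng.standard_normal(n_voxels)
pattern_aro = rng.standard_normal(n_voxels)
valence = rng.uniform(1, 9, n_trials)  # IAPS-style 1-9 normative scores
arousal = rng.uniform(1, 9, n_trials)
X = (np.outer(valence, pattern_val)
     + np.outer(arousal, pattern_aro)
     + 3.0 * rng.standard_normal((n_trials, n_voxels)))

def ridge_fit(X, y, lam=10.0):
    """Closed-form ridge regression on centered data."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    A = Xc.T @ Xc + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, Xc.T @ yc)

w_val = ridge_fit(X, valence)
w_aro = ridge_fit(X, arousal)

# Geometric analysis: if the neuroanatomical encodings of valence and
# arousal are orthogonal, the cosine of their weight vectors is ~0.
cosine = w_val @ w_aro / (np.linalg.norm(w_val) * np.linalg.norm(w_aro))
```

The key design point is that the same fitted weight vectors serve double duty: they generate the predictions and, as voxel-space directions, support the geometric (angle) analysis of how valence and arousal are encoded.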
Journal Name:
Scientific reports
Sponsoring Org:
National Science Foundation
More Like this
  1. Across multiple domains of social perception (including social categorization, emotion perception, impression formation, and mentalizing), multivariate pattern analysis (MVPA) of fMRI data has permitted a more detailed understanding of how social information is processed and represented in the brain. As in other neuroimaging fields, the neuroscientific study of social perception initially relied on broad structure-function associations derived from univariate fMRI analysis to map neural regions involved in these processes. In this review, we trace the ways that social neuroscience studies using MVPA have built on these neuroanatomical associations to better characterize the computational relevance of different brain regions, and how MVPA allows explicit tests of the correspondence between psychological models and the neural representation of social information. We also describe current and future advances in methodological approaches to multivariate fMRI data and their theoretical value for the neuroscience of social perception.
  2. Patterns of estimated neural activity derived from resting state functional magnetic resonance imaging (rs-fMRI) have been shown to predict a wide range of cognitive and behavioral outcomes in both normative and clinical populations. Yet, without links to established cognitive processes, the functional brain states associated with the resting brain will remain unexplained, and potentially confounded, markers of individual differences. In this work we demonstrate the application of multivoxel pattern classifiers (MVPCs) to predict the valence and arousal properties of spontaneous affect processing in the task-non-engaged resting state. rs-fMRI data were acquired from subjects that were held out from a subject set that underwent image-based affect induction concurrent with fMRI to train the MVPCs. We also validated these affective predictions against a well-established, independent measure of autonomic arousal, skin conductance response. These findings suggest a new neuroimaging methodology for resting state analysis in which models, trained on cognition-specific task-based fMRI acquired from well-matched cohorts, capably predict hidden cognitive processes operating within the resting brain.
  3. In this paper, a hardware-optimized approach to emotion recognition based on the efficient brain-inspired hyperdimensional computing (HDC) paradigm is proposed. Emotion recognition provides valuable information for human–computer interactions; however, the large number of input channels (>200) and modalities (>3) involved in emotion recognition are significantly expensive from a memory perspective. To address this, methods for memory reduction and optimization are proposed, including a novel approach that takes advantage of the combinatorial nature of the encoding process, and an elementary cellular automaton. HDC with early sensor fusion is implemented alongside the proposed techniques, achieving two-class multi-modal classification accuracies of >76% for valence and >73% for arousal on the multi-modal AMIGOS and DEAP data sets, almost always better than the state of the art. The required vector storage is reduced by 98% and the frequency of vector requests by at least 1/5. The results demonstrate the potential of efficient hyperdimensional computing for low-power, multi-channeled emotion recognition tasks.
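The HDC encoding step can be sketched as a toy binding-and-bundling example with bipolar hypervectors. The dimensionality, channel count, and quantization levels below are arbitrary illustrative choices, and the memory optimizations the paper proposes are not shown:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 1000                      # hypervector dimension (10,000 is typical)
n_channels, n_levels = 8, 10  # illustrative channel/quantization counts

# Random bipolar item memory: one hypervector per channel ID and one
# per quantized signal level.
channel_hv = rng.choice([-1, 1], size=(n_channels, D))
level_hv = rng.choice([-1, 1], size=(n_levels, D))

def encode(levels):
    """Bind each channel ID with its level (elementwise product, the
    bipolar analogue of XOR), then bundle by majority vote."""
    bound = channel_hv * level_hv[levels]
    return np.sign(bound.sum(axis=0))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sample = rng.integers(0, n_levels, n_channels)  # quantized readings
other = rng.integers(0, n_levels, n_channels)
hv = encode(sample)
# Identical inputs map to the same hypervector, while unrelated inputs
# map to nearly orthogonal ones; classification compares a query
# hypervector against bundled class prototypes by this cosine distance.
```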

  4. The present study compares how individuals perceive gradient acoustic realizations of emotion produced by a human voice versus an Amazon Alexa text-to-speech (TTS) voice. We manipulated semantically neutral sentences spoken by both talkers with identical emotional synthesis methods, using three levels of increasing ‘happiness’ (0%, 33%, 66% ‘happier’). On each trial, listeners (native speakers of American English, n=99) rated a given sentence on two scales to assess dimensions of emotion: valence (negative-positive) and arousal (calm-excited). Participants also rated the Alexa voice on several parameters to assess anthropomorphism (e.g., naturalness, human-likeness, etc.). Results showed that the emotion manipulations led to increases in perceived positive valence and excitement. Yet, the effect differed by interlocutor: increasing ‘happiness’ manipulations led to larger changes for the human voice than the Alexa voice. Additionally, we observed individual differences in perceived valence/arousal based on participants’ anthropomorphism scores. Overall, this line of research can speak to theories of computer personification and elucidate our changing relationship with voice-AI technology.
  5. Affective studies provide essential insights to address emotion recognition and tracking. In traditional open-loop structures, a lack of knowledge about the internal emotional state makes the system incapable of adjusting stimuli parameters and automatically responding to changes in the brain. To address this issue, we propose to use facial electromyogram measurements as biomarkers to infer the internal hidden brain state as feedback to close the loop. In this research, we develop a systematic way to track and control emotional valence, which codes emotions as being pleasant or obstructive. Hence, we conduct a simulation study by modeling and tracking the subject's emotional valence dynamics using state-space approaches. We employ Bayesian filtering to estimate the person-specific model parameters along with the hidden valence state, using continuous and binary features extracted from experimental electromyogram measurements. Moreover, we utilize a mixed-filter estimator to infer the secluded brain state in a real-time simulation environment. We close the loop with a fuzzy logic controller in two categories of regulation: inhibition and excitation. By designing a control action, we aim to automatically reflect any required adjustments within the simulation and reach the desired emotional state levels. Final results demonstrate that, by making use of physiological data, the proposed controller could effectively regulate the estimated valence state. Ultimately, we envision future outcomes of this research to support alternative forms of self-therapy by using wearable machine interface architectures capable of mitigating periods of pervasive emotions and maintaining daily well-being and welfare.
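The state-space tracking idea, stripped to its simplest linear-Gaussian form, can be sketched as a scalar Kalman filter on synthetic data. The actual study uses Bayesian mixed filtering over continuous and binary EMG features; every constant and variable name here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical model: the hidden valence x_t follows a random walk,
# and the observed EMG-derived feature is y_t = x_t + noise.
T = 200
q, r = 0.01, 0.25  # process and measurement noise variances (assumed)
x_true = np.cumsum(rng.normal(0.0, np.sqrt(q), T))
y = x_true + rng.normal(0.0, np.sqrt(r), T)

# Scalar Kalman filter (predict/update); the paper's mixed-filter
# estimator generalizes this update to binary observations as well.
x_hat, P = 0.0, 1.0
estimates = []
for yt in y:
    P += q                     # predict: uncertainty grows
    K = P / (P + r)            # Kalman gain
    x_hat += K * (yt - x_hat)  # update with the innovation
    P *= (1.0 - K)
    estimates.append(x_hat)
estimates = np.asarray(estimates)

rmse_filter = np.sqrt(np.mean((estimates - x_true) ** 2))
rmse_raw = np.sqrt(np.mean((y - x_true) ** 2))
```

In a closed-loop setting, the filtered estimate (rather than the raw noisy feature) is what a controller would act on, which is why the tracking error of the estimate matters.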