
Title: Culture and Gender Modulate dlPFC Integration in the Emotional Brain: Evidence from Dynamic Causal Modeling
Past research has recognized culture and gender variation in the experience of emotion, yet this has not been examined on a level of effective connectivity. To determine culture and gender differences in effective connectivity during emotional experiences, we applied dynamic causal modeling (DCM) to electroencephalography (EEG) measures of brain activity obtained from Chinese and American participants while they watched emotion-evoking images. Relative to US participants, Chinese participants favored a model bearing a more integrated dorsolateral prefrontal cortex (dlPFC) during fear v. neutral experiences. Meanwhile, relative to males, females favored a model bearing a less integrated dlPFC during fear v. neutral experiences. A culture-gender interaction for winning models was also observed; only US participants showed an effect of gender, with US females favoring a model bearing a less integrated dlPFC compared to the other groups. These findings suggest that emotion and its neural correlates depend in part on the cultural background and gender of an individual. To our knowledge, this is also the first study to apply both DCM and EEG measures in examining culture-gender interaction and emotion.
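DCM itself requires specialized tooling (e.g., SPM's DCM for ERP), but the comparison step that decides which model a group "favors" can be sketched compactly. Below is a minimal, hypothetical sketch of a fixed-effects group comparison: per-subject log model evidences (the numbers here are made up for illustration) are summed into a group log Bayes factor, with a positive value favoring model A. The function name and the example values are assumptions, not the paper's actual pipeline.

```python
import numpy as np

def group_log_bayes_factor(log_ev_a, log_ev_b):
    """Fixed-effects group comparison of two models: sum over subjects of
    per-subject log-evidence differences. Positive => model A wins."""
    return float(np.sum(np.asarray(log_ev_a) - np.asarray(log_ev_b)))

# Hypothetical per-subject log evidences for a "more integrated dlPFC"
# model (A) vs. a "less integrated dlPFC" model (B) in one group.
log_ev_a = [-210.3, -198.7, -205.1]
log_ev_b = [-214.0, -199.2, -209.8]
lbf = group_log_bayes_factor(log_ev_a, log_ev_b)
winner = "A" if lbf > 0 else "B"
```

By convention, a log Bayes factor above about 3 is taken as strong evidence for the winning model; random-effects BMS (as implemented in SPM) is the more robust alternative when model preference may differ across subjects.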
Award ID(s):
1551688
PAR ID:
10320986
Author(s) / Creator(s):
Date Published:
Journal Name:
Cognitive neurodynamics
Volume:
17
ISSN:
1871-4080
Page Range / eLocation ID:
153–168
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Situated models of emotion hypothesize that emotions are optimized for the context at hand, but most neuroimaging approaches ignore context. For the first time, we applied Granger causality (GC) analysis to determine how an emotion is affected by a person’s cultural background and situation. Electroencephalographic recordings were obtained from mainland Chinese (CHN) and US participants as they viewed and rated fearful and neutral images displaying either social or non-social contexts. Independent component analysis and GC analysis were applied to determine the epoch of peak effect for each condition and to identify sources and sinks among brain regions of interest. We found that source–sink couplings differed across culture, situation and culture × situation. Mainland CHN participants alone showed preference for an early-onset source–sink pairing with the supramarginal gyrus as a causal source, suggesting that, relative to US participants, CHN participants more strongly prioritized a scene’s social aspects in their response to fearful scenes. Our findings suggest that the neural representation of fear indeed varies according to both culture and situation and their interaction in ways that are consistent with norms instilled by cultural background. 
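Granger causality asks whether the past of one signal improves prediction of another beyond that signal's own past. The paper applied GC to EEG source signals; the sketch below is only a minimal pure-numpy illustration of the idea, comparing restricted and full autoregressive fits with an F-like statistic (the function name and simulated data are hypothetical, not the study's implementation).

```python
import numpy as np

def granger_stat(source, sink, lag=2):
    """F-like statistic: does the past of `source` improve prediction of
    `sink` beyond `sink`'s own past? Larger => stronger GC evidence."""
    n = len(sink)
    rows = n - lag
    Y = sink[lag:]
    # restricted model: sink's own lags; full model: add source's lags
    own = [sink[lag - k:n - k] for k in range(1, lag + 1)]
    cross = [source[lag - k:n - k] for k in range(1, lag + 1)]
    X_r = np.column_stack(own)
    X_f = np.column_stack(own + cross)

    def rss(X):
        X1 = np.column_stack([np.ones(rows), X])  # add intercept
        beta, *_ = np.linalg.lstsq(X1, Y, rcond=None)
        r = Y - X1 @ beta
        return r @ r

    rss_r, rss_f = rss(X_r), rss(X_f)
    df1, df2 = lag, rows - X_f.shape[1] - 1
    return ((rss_r - rss_f) / df1) / (rss_f / df2)

# Simulated check: x drives y with a one-step delay, but not vice versa.
rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.2 * y[t - 1] + 0.6 * x[t - 1] + 0.1 * rng.standard_normal()
f_xy = granger_stat(x, y)  # large: x Granger-causes y
f_yx = granger_stat(y, x)  # small: no causality in reverse
```

In source-sink terms, a region is a "causal source" for a "sink" when this asymmetry holds in one direction but not the other; production analyses would add significance testing and model-order selection.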
  2. Abstract: Infancy is a sensitive period of development, during which experiences of parental care are particularly important for shaping the developing brain. In a longitudinal study of N = 95 mothers and infants, we examined links between caregiving behavior (maternal sensitivity observed during a mother-infant free-play) and infants' neural response to emotion (happy, angry, and fearful faces) at 5 and 7 months of age. Neural activity was assessed using functional Near-Infrared Spectroscopy (fNIRS) in the dorsolateral prefrontal cortex (dlPFC), a region involved in cognitive control and emotion regulation. Maternal sensitivity was positively correlated with infants' neural responses to happy faces in the bilateral dlPFC and was associated with relative increases in such responses from 5 to 7 months. Multilevel analyses revealed caregiving-related individual differences in infants' neural responses to happy compared to fearful faces in the bilateral dlPFC, as well as other brain regions. We suggest that variability in dlPFC responses to emotion in the developing brain may be one correlate of early experiences of caregiving, with implications for social-emotional functioning and self-regulation. Research Highlights: Infancy is a sensitive period of brain development, during which experiences with caregivers are especially important. This study examined links between sensitive maternal care and infants' neural responses to emotion at 5-7 months of age, using functional near-infrared spectroscopy (fNIRS). Experiences of sensitive care were associated with infants' neural responses to emotion, particularly happy faces, in the dorsolateral prefrontal cortex.
  3. The paper reports ongoing research toward the design of multimodal affective pedagogical agents that are effective for different types of learners and applications. In particular, the work reported in the paper investigated the extent to which the type of character design (realistic versus stylized) affects students' perception of an animated agent's facial emotions, and whether the effects are moderated by learner characteristics (e.g., gender). Eighty-two participants viewed 10 animation clips featuring a stylized character exhibiting 5 different emotions (happiness, sadness, fear, surprise, and anger; 2 clips per emotion), and 10 clips featuring a realistic character portraying the same emotional states. The participants were asked to name the emotions and rate their sincerity, intensity, and typicality. The results indicated that for recognition, participants were slightly more likely to recognize the emotions displayed by the stylized agent, although the difference was not statistically significant. The stylized agent was on average rated significantly higher for facial emotion intensity, whereas the differences in ratings for typicality and sincerity across all emotions were not statistically significant. A significant difference in ratings was shown in regard to sadness (within typicality), happiness (within sincerity), and fear, anger, sadness, and happiness (within intensity), with the stylized agent rated higher. Gender was not a significant correlate across all emotions or for individual emotions.
  4. In this paper, the authors explore different approaches to animating 3D facial emotions, some of which use manual keyframe animation and some of which use machine learning. To compare approaches, the authors conducted an experiment consisting of side-by-side comparisons of animation clips generated by skeleton, blendshape, audio-driven, and vision-based capture facial animation techniques. Ninety-five participants viewed twenty face animation clips of characters expressing five distinct emotions (anger, sadness, happiness, fear, neutral), which were created using the four different facial animation techniques. After viewing each clip, the participants were asked to identify the emotions that the characters appeared to be conveying and rate their naturalness. Findings showed that the naturalness ratings of the happy emotion produced by the four methods tended to be consistent, whereas the naturalness ratings of the fear emotion created with skeletal animation were significantly higher than the other methods. Recognition of sad and neutral emotions was very low for all methods as compared to the other emotions. Overall, the skeleton approach had significantly higher ratings for naturalness and a higher recognition rate than the other methods.
  5. Cross-modal effects provide a model framework for investigating hierarchical inter-areal processing, particularly under conditions where unimodal cortical areas receive contextual feedback from other modalities. Here, using complementary behavioral and brain imaging techniques, we investigated the functional networks participating in face and voice processing during gender perception, a high-level feature of voice and face perception. Within the framework of a signal detection decision model, maximum likelihood conjoint measurement (MLCM) was used to estimate the contributions of the face and voice to gender comparisons between pairs of audio-visual stimuli in which the face and voice were independently modulated. Top-down contributions were varied by instructing participants to make judgments based on the gender of either the face, the voice, or both modalities (N = 12 for each task). Estimated face and voice contributions to the judgments of the stimulus pairs were not independent; both contributed to all tasks, but their respective weights varied over a 40-fold range due to top-down influences. Models that best described the modal contributions required the inclusion of two different top-down interactions: (i) an interaction that depended on gender congruence across modalities (i.e., the difference between face and voice modalities for each stimulus); (ii) an interaction that depended on the within-modality gender magnitude. The significance of these interactions was task dependent. Specifically, the gender congruence interaction was significant for the face and voice tasks, while the gender magnitude interaction was significant for the face and stimulus tasks. Subsequently, we used the same stimuli and related tasks in a functional magnetic resonance imaging (fMRI) paradigm (N = 12) to explore the neural correlates of these perceptual processes, analyzed with Dynamic Causal Modeling (DCM) and Bayesian Model Selection.
Results revealed changes in effective connectivity between the unimodal Fusiform Face Area (FFA) and Temporal Voice Area (TVA) in a fashion that paralleled the face and voice behavioral interactions observed in the psychophysical data. These findings highlight the role of multiple parallel feedback pathways to unimodal areas in perception.
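Bayesian Model Selection over a DCM model space is normally run inside a neuroimaging toolbox, but its fixed-effects core under a flat prior over models reduces to a softmax of the (summed) log evidences. A minimal sketch with hypothetical log-evidence values (the numbers and function name are illustrative, not the study's data):

```python
import numpy as np

def posterior_model_probs(log_evidence):
    """Posterior model probabilities under a flat model prior: exponentiate
    log evidences after subtracting the max (for numerical stability),
    then normalize so the probabilities sum to 1."""
    le = np.asarray(log_evidence, dtype=float)
    w = np.exp(le - le.max())
    return w / w.sum()

# Hypothetical summed log evidences for three candidate FFA-TVA
# connectivity models; the largest value identifies the winner.
probs = posterior_model_probs([-1204.1, -1210.6, -1198.3])
best = int(np.argmax(probs))
```

Because evidence differences enter through an exponential, a gap of only a few log units already concentrates nearly all posterior mass on one model, which is why DCM studies report winning models rather than raw evidences.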