Abstract Traditionally, lust and pride have been considered pleasurable, yet sinful in the West. Conversely, guilt is often considered aversive, yet valuable. These emotions illustrate how evaluations about specific emotions and beliefs about their hedonic properties may often diverge. Evaluations about specific emotions may shape important aspects of emotional life (e.g. in emotion regulation, emotion experience and acquisition of emotion concepts). Yet these evaluations remain understudied in affective neuroscience. Prior work in emotion regulation, affective experience, evaluation/attitudes and decision-making points to anterior prefrontal areas as candidates for supporting evaluative emotion knowledge. Thus, we examined the brain areas associated with evaluative and hedonic emotion knowledge, with a focus on the anterior prefrontal cortex. Participants (N = 25) made evaluative and hedonic ratings about emotion knowledge during functional magnetic resonance imaging (fMRI). We found that greater activity in the medial prefrontal cortex (mPFC), ventromedial PFC (vmPFC) and precuneus was associated with an evaluative (vs hedonic) focus on emotion knowledge. Our results suggest that the mPFC and vmPFC, in particular, may play a role in evaluating discrete emotions.
Emergence of Emotion Selectivity in Deep Neural Networks Trained to Recognize Visual Objects
Recent neuroimaging studies have shown that the visual cortex plays an important role in representing the affective significance of visual input. The origin of these affect-specific visual representations is debated: are they intrinsic to the visual system, or do they arise through reentry from frontal emotion processing structures such as the amygdala? We examined this problem by combining convolutional neural network (CNN) models of the human ventral visual cortex, pre-trained on ImageNet, with two datasets of affective images. In all layers of the CNN models, we found artificial neurons that responded consistently and selectively to neutral, pleasant, or unpleasant images; lesioning these neurons by setting their output to zero, or enhancing them by increasing their gain, decreased or increased emotion recognition performance, respectively. These results support the idea that the visual system may have an intrinsic ability to represent the affective significance of visual input and suggest that CNNs offer a fruitful platform for testing neuroscientific theories.
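As a minimal sketch of the perturbation idea described above (not the authors' code), the snippet below shows how unit-level "lesioning" and "enhancement" can be applied to a pretrained CNN with a forward hook. The model (AlexNet as a stand-in), the layer, and the channel indices are hypothetical placeholders; in the study, emotion-selective units were identified empirically before being perturbed.

```python
# Sketch: zero out (lesion) or amplify (enhance) selected feature-map channels
# in an ImageNet-pretrained CNN, then run a forward pass on an affective image.
import torch
import torchvision.models as models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

selective_channels = [12, 57, 101]   # hypothetical indices of "emotion-selective" units
gain = 0.0                           # 0.0 = lesion (zero output); >1.0 = enhancement

def perturb_units(module, inputs, output):
    # Scale the selected channels: zeroing removes their contribution,
    # a gain > 1 amplifies it. Returning the tensor replaces the module output.
    output[:, selective_channels, :, :] = output[:, selective_channels, :, :] * gain
    return output

# Hook an intermediate convolutional layer (layer choice is illustrative).
hook = model.features[8].register_forward_hook(perturb_units)

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed affective image
    perturbed_logits = model(x)

hook.remove()                          # restore the intact network
```

Emotion recognition performance before and after such perturbations would then be compared by decoding image valence from the (intact vs. perturbed) network's responses; the decoding step is omitted here.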
- PAR ID: 10517490
- Editor(s): Wei, Xue-Xin
- Publisher / Repository: Public Library of Science
- Date Published:
- Journal Name: PLOS Computational Biology
- Volume: 20
- Issue: 3
- ISSN: 1553-7358
- Page Range / eLocation ID: e1011943
- Subject(s) / Keyword(s): Emotion Selectivity, Deep Neural Network
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract The cerebral cortex of primates encompasses multiple anatomically and physiologically distinct areas processing visual information. Areas V1, V2, and V5/MT are conserved across mammals and are central for visual behavior. To facilitate the generation of biologically accurate computational models of primate early visual processing, here we provide an overview of over 350 published studies of these three areas in the genus Macaca, whose visual system provides the closest model for human vision. The literature reports 14 anatomical connection types from the lateral geniculate nucleus of the thalamus to V1 having distinct layers of origin or termination, and 194 connection types between V1, V2, and V5, forming multiple parallel and interacting visual processing streams. Moreover, within V1, there are reports of 286 and 120 types of intrinsic excitatory and inhibitory connections, respectively. Physiologically, tuning of neuronal responses to 11 types of visual stimulus parameters has been consistently reported. Overall, the optimal spatial frequency (SF) of constituent neurons decreases with cortical hierarchy. Moreover, V5 neurons are distinct from neurons in other areas for their higher direction selectivity, higher contrast sensitivity, higher temporal frequency tuning, and wider SF bandwidth. We also discuss currently unavailable data that could be useful for biologically accurate models.
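As a small illustration of how such tabulated connectivity data might be encoded when building a model (our sketch, not from the review; field names and the example entry are hypothetical):

```python
# Sketch: a record type for reported connection types in macaque early vision.
from dataclasses import dataclass

@dataclass
class ConnectionType:
    source_area: str    # e.g. "LGN", "V1", "V2", "V5"
    source_layer: str   # layer of origin
    target_area: str
    target_layer: str   # layer of termination
    sign: str           # "excitatory" or "inhibitory"
    n_reports: int      # number of studies reporting this connection

# Hypothetical example entry: magnocellular LGN input to layer 4C-alpha of V1.
lgn_to_v1 = ConnectionType("LGN", "M", "V1", "4Calpha", "excitatory", 1)

def connections_into(area, table):
    """Return all connection types terminating in a given area."""
    return [c for c in table if c.target_area == area]
```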
Abstract Processing facial expressions of emotion draws on a distributed brain network. In particular, judging ambiguous facial emotions involves coordination between multiple brain areas. Here, we applied multimodal functional connectivity analysis to achieve a network-level understanding of the neural mechanisms underlying perceptual ambiguity in facial expressions. We found directional effective connectivity between the amygdala, dorsomedial prefrontal cortex (dmPFC), and ventromedial PFC, supporting both bottom-up affective processes for ambiguity representation/perception and top-down cognitive processes for ambiguity resolution/decision. Direct recordings from human neurosurgical patients showed that the responses of amygdala and dmPFC neurons were modulated by the level of emotion ambiguity, and amygdala neurons responded earlier than dmPFC neurons, reflecting the bottom-up process for ambiguity processing. We further found that parietal-frontal coherence and delta-alpha cross-frequency coupling were involved in encoding emotion ambiguity. We replicated the EEG coherence result in independent experiments and further showed modulation of this coherence. EEG source connectivity revealed that the dmPFC exerted top-down regulation over activity in other brain regions. Lastly, we showed altered behavioral responses in neuropsychiatric patients who may have dysfunctions in amygdala-PFC functional connectivity. Together, using multimodal experimental and analytical approaches, we have delineated a neural network that underlies processing of emotion ambiguity.
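A minimal sketch, not the authors' pipeline, of two analyses named in this abstract: spectral coherence between two channels and delta-alpha phase-amplitude coupling. The sampling rate, band edges, and synthetic signals are assumptions for illustration only.

```python
# Sketch: parietal-frontal coherence and a delta-alpha coupling index
# computed on synthetic stand-in signals.
import numpy as np
from scipy.signal import coherence, butter, filtfilt, hilbert

fs = 500.0                                    # Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
parietal = rng.standard_normal(t.size)        # stand-ins for preprocessed EEG channels
frontal = 0.5 * parietal + rng.standard_normal(t.size)

# Parietal-frontal coherence across frequencies.
freqs, cxy = coherence(parietal, frontal, fs=fs, nperseg=1024)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

# Delta-alpha cross-frequency coupling: does delta phase modulate alpha amplitude?
delta_phase = np.angle(hilbert(bandpass(frontal, 1, 4, fs)))
alpha_amp = np.abs(hilbert(bandpass(frontal, 8, 12, fs)))

# Mean-vector-length modulation index (Canolty-style); values near 0 mean no coupling.
mi = np.abs(np.mean(alpha_amp * np.exp(1j * delta_phase))) / np.mean(alpha_amp)
print(f"peak coherence: {cxy.max():.2f}, delta-alpha MI: {mi:.3f}")
```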
Assessments of the mouse visual system based on spatial-frequency analysis imply that its visual capacity is low, with few neurons responding to spatial frequencies greater than 0.5 cycles per degree. However, visually mediated behaviors, such as prey capture, suggest that the mouse visual system is more precise. We introduce a stimulus class—visual flow patterns—that is more like what the mouse would encounter in the natural world than are sine-wave gratings but is more tractable for analysis than are natural images. We used 128-site silicon microelectrodes to measure the simultaneous responses of single neurons in the primary visual cortex (V1) of alert mice. While holding temporal-frequency content fixed, we explored a class of drifting patterns of black or white dots that have energy only at higher spatial frequencies. These flow stimuli evoke strong visually mediated responses well beyond those predicted by spatial-frequency analysis. Flow responses predominate in higher spatial-frequency ranges (0.15–1.6 cycles per degree), many are orientation or direction selective, and flow responses of many neurons depend strongly on the sign of contrast. Many cells exhibit distributed responses across our stimulus ensemble. Together, these results challenge conventional linear approaches to visual processing and expand our understanding of the mouse’s visual capacity to behaviorally relevant ranges.
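A toy sketch (not the study's stimulus code) of a coherently drifting black/white dot pattern and a check of where its spatial-frequency energy lies. The image size, dot size, drift speed, and the pixels-per-degree conversion are hypothetical.

```python
# Sketch: render drifting dot frames and inspect the spatial-frequency spectrum.
import numpy as np

def dot_frame(positions, polarities, size=256, dot_radius=2):
    """Render black (-1) and white (+1) dots on a mid-gray (0) background."""
    frame = np.zeros((size, size))
    yy, xx = np.mgrid[0:size, 0:size]
    for (x, y), pol in zip(positions, polarities):
        frame[(xx - x) ** 2 + (yy - y) ** 2 <= dot_radius ** 2] = pol
    return frame

rng = np.random.default_rng(1)
size, n_dots, speed, direction = 256, 300, 2.0, 0.0   # speed in px/frame, direction in radians
positions = rng.uniform(0, size, (n_dots, 2))
polarities = rng.choice([-1.0, 1.0], n_dots)

frames = []
for i in range(30):
    drift = i * speed * np.array([np.cos(direction), np.sin(direction)])
    frames.append(dot_frame((positions + drift) % size, polarities, size))

# Radially binned power spectrum of one frame, converted to cycles/degree
# under the (hypothetical) assumption that the image spans 64 degrees.
spec = np.abs(np.fft.fftshift(np.fft.fft2(frames[0]))) ** 2
fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(size)),
                     np.fft.fftshift(np.fft.fftfreq(size)), indexing="ij")
radius_cpd = np.sqrt(fx ** 2 + fy ** 2) * (size / 64.0)   # cycles/px -> cycles/deg
mask = radius_cpd > 0                                      # exclude the DC component
peak = radius_cpd[mask][np.argmax(spec[mask])]
print(f"peak spectral energy at ~{peak:.2f} cycles/degree")
```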
Humans routinely extract important information from images and videos, relying on their gaze. In contrast, computational systems still have difficulty annotating important visual information in a human-like manner, in part because human gaze is often not included in the modeling process. Human input is also particularly relevant for processing and interpreting affective visual information. To address this challenge, we captured human gaze, spoken language, and facial expressions simultaneously in an experiment with visual stimuli characterized by subjective and affective content. Observers described the content of complex emotional images and videos depicting positive and negative scenarios, and also their feelings about the imagery being viewed. We explore patterns of these modalities, for example by comparing the affective nature of participant-elicited linguistic tokens with image valence. Additionally, we expand a framework for generating automatic alignments between the gaze and spoken language modalities for visual annotation of images. Multimodal alignment is challenging due to the varying temporal offset between modalities. We explore alignment robustness when images have affective content and whether image valence influences alignment results. We also study whether word frequency-based filtering impacts results; both the unfiltered and filtered scenarios perform better than baseline comparisons, and filtering yields a substantial decrease in alignment error rate. We provide visualizations of the resulting annotations from multimodal alignment. This work has implications for areas such as image understanding, media accessibility, and multimodal data fusion.
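A minimal sketch, assuming hypothetical data structures, of aligning spoken word tokens to gaze fixations despite a temporal offset (gaze typically leads speech). The lag value, the label-matching rule, and the error metric are illustrative assumptions, not the paper's framework.

```python
# Sketch: align each spoken word to the fixation active a fixed lag earlier,
# then score the fraction of misaligned words.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Fixation:
    onset: float      # seconds
    offset: float
    region: str       # annotated image region the gaze landed on

@dataclass
class Token:
    time: float       # word onset in seconds
    word: str

def align(tokens: List[Token], fixations: List[Fixation],
          lag: float = 0.8) -> List[Optional[Fixation]]:
    """For each word, find the fixation active `lag` seconds earlier (if any)."""
    out = []
    for tok in tokens:
        t = tok.time - lag
        out.append(next((f for f in fixations if f.onset <= t <= f.offset), None))
    return out

def alignment_error_rate(tokens, aligned, label_of_word):
    """Fraction of words whose aligned fixation is missing or labels a different region."""
    errors = sum(1 for tok, fix in zip(tokens, aligned)
                 if fix is None or label_of_word(tok.word) != fix.region)
    return errors / max(len(tokens), 1)
```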