Multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data has critically advanced the neuroanatomical understanding of affect processing in the human brain. Central to these advances is the brain state, a temporally succinct, fMRI-derived pattern of neural activation that serves as a processing unit. Establishing the brain state’s central role in affect processing, however, requires that it predict multiple independent measures of affect. We employed MVPA-based regression to predict the valence and arousal properties of visual stimuli sampled from the International Affective Picture System (IAPS), along with the corollary skin conductance response (SCR), for demographically diverse healthy human participants (n = 19). We found that brain states significantly predicted the normative valence and arousal scores of the stimuli as well as the attendant individual SCRs. In contrast, SCRs significantly predicted arousal only. The prediction effect size of the brain state was more than three times greater than that of SCR. Moreover, neuroanatomical analysis of the regression parameters found remarkable agreement with regions long established by univariate fMRI analyses in the emotion-processing literature. Finally, geometric analysis of these parameters found that the neuroanatomical encodings of valence and arousal are orthogonal, as originally posited by the circumplex model of dimensional emotion.
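As a rough, hypothetical sketch of this kind of analysis (not the authors' pipeline), the Python snippet below fits cross-validated ridge regressions from trial-wise voxel patterns to valence and arousal scores, then measures the angle between the two resulting weight maps. Here `X`, `valence`, and `arousal` are random placeholders standing in for real beta-series features and normative IAPS scores.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.standard_normal((90, 5000))    # stand-in trial-wise voxel patterns
valence = rng.uniform(1, 9, 90)        # stand-in normative IAPS valence
arousal = rng.uniform(1, 9, 90)        # stand-in normative IAPS arousal

model = RidgeCV(alphas=np.logspace(-2, 4, 13))
val_hat = cross_val_predict(model, X, valence, cv=5)
aro_hat = cross_val_predict(model, X, arousal, cv=5)
print("valence prediction r =", np.corrcoef(valence, val_hat)[0, 1])
print("arousal prediction r =", np.corrcoef(arousal, aro_hat)[0, 1])

# Orthogonality check: a near-zero cosine between the two weight maps is
# consistent with orthogonal valence/arousal encodings (circumplex model).
w_val = model.fit(X, valence).coef_.ravel()
w_aro = model.fit(X, arousal).coef_.ravel()
cosine = w_val @ w_aro / (np.linalg.norm(w_val) * np.linalg.norm(w_aro))
print("cosine(w_val, w_aro) =", cosine)
```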
Predicting image memorability from evoked feelings
While viewing a visual stimulus, we often cannot tell whether it is inherently memorable or forgettable. However, the memorability of a stimulus can be quantified and partially predicted by a collection of conceptual and perceptual factors. Higher-level properties that represent the “meaningfulness” of a visual stimulus to viewers best predict whether it will be remembered or forgotten across a population. Here, we hypothesize that the feelings evoked by an image, operationalized as the valence and arousal dimensions of affect, significantly contribute to the memorability of scene images. We ran two complementary experiments to investigate the influence of affect on scene memorability, in the process creating a new image set (VAMOS) of hundreds of natural scene images for which we obtained valence, arousal, and memorability scores. From our first experiment, we found memorability to be highly reliable for scene images that span a wide range of evoked arousal and valence. From our second experiment, we found that both valence and arousal are significant but weak predictors of image memorability. Scene images were most memorable if they were slightly negatively valenced and highly arousing. Images that were extremely positive or unarousing were most forgettable. Valence and arousal together accounted for less than 8% of the variance in image memorability. These findings suggest that evoked affect contributes to the overall memorability of a scene image but, like other singular predictors, does not fully explain it. Instead, memorability is best explained by an assemblage of visual features that combine, in perhaps unintuitive ways, to predict what is likely to stick in our memory.
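To make the shape of this effect concrete, here is a minimal, hypothetical sketch of the kind of regression implied (not the authors' code): memorability is regressed on valence and arousal with quadratic terms, since the reported peak at slightly negative valence and high arousal implies a nonlinear relationship. All arrays are random placeholders, not the VAMOS data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
n = 500
valence = rng.uniform(1, 9, n)        # placeholder per-image valence ratings
arousal = rng.uniform(1, 9, n)        # placeholder per-image arousal ratings
memorability = rng.uniform(0, 1, n)   # placeholder memorability scores

# Quadratic expansion lets the fit capture a peak at slightly negative
# valence and high arousal rather than a purely linear trend.
X = PolynomialFeatures(degree=2, include_bias=False).fit_transform(
    np.column_stack([valence, arousal]))
r2 = LinearRegression().fit(X, memorability).score(X, memorability)
print(f"variance explained by affect alone: R^2 = {r2:.3f}")  # paper: < 0.08
```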
- PAR ID: 10566585
- Publisher / Repository: Behavior Research Methods
- Date Published:
- Journal Name: Behavior Research Methods
- Volume: 57
- Issue: 1
- ISSN: 1554-3528
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Patterns of estimated neural activity derived from resting-state functional magnetic resonance imaging (rs-fMRI) have been shown to predict a wide range of cognitive and behavioral outcomes in both normative and clinical populations. Yet, without links to established cognitive processes, the functional brain states associated with the resting brain will remain unexplained, and potentially confounded, markers of individual differences. In this work we demonstrate the application of multivoxel pattern classifiers (MVPCs) to predict the valence and arousal properties of spontaneous affect processing in the task-non-engaged resting state. The MVPCs were trained on fMRI data from subjects who underwent image-based affect induction concurrent with fMRI; rs-fMRI data were acquired from held-out subjects. We also validated these affective predictions against a well-established, independent measure of autonomic arousal, skin conductance response. These findings suggest a new neuroimaging methodology for resting-state analysis in which models, trained on cognition-specific task-based fMRI acquired from well-matched cohorts, capably predict hidden cognitive processes operating within the resting brain.
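A schematic of the cross-subject decoding scheme described, as a hypothetical sketch rather than the authors' implementation: a classifier trained on task-based affect-induction fMRI is applied to resting-state volumes from held-out subjects, and its moment-to-moment predictions are compared against a skin conductance series. All arrays below are placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X_task = rng.standard_normal((200, 5000))   # task trials x voxels (training set)
y_arousal = rng.integers(0, 2, 200)         # high/low arousal induction labels
X_rest = rng.standard_normal((300, 5000))   # resting-state volumes, held-out subject

# Train on task-based affect induction, then score the resting brain.
clf = LinearSVC(dual=False).fit(X_task, y_arousal)
rest_pred = clf.decision_function(X_rest)   # moment-to-moment arousal estimate

# Validate against an independent autonomic measure (e.g., SCR amplitude
# time-locked to the same volumes).
scr = rng.standard_normal(300)              # placeholder SCR series
print("r(predicted arousal, SCR) =", np.corrcoef(rest_pred, scr)[0, 1])
```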
Music listening can evoke a range of extra-musical thoughts, from colors and smells to autobiographical memories and fictional stories. We investigated music-evoked thoughts as an overarching category, to examine how the music’s genre and emotional expression, as well as familiarity with the style and liking of individual excerpts, predicted the occurrence, type, novelty, and valence of thoughts. We selected 24 unfamiliar, instrumental music excerpts evenly distributed across three genres (classical, electronic, pop/rock) and two levels of expressed valence (positive, negative) and arousal (high, low). UK participants (N = 148, mean age = 28.68 years) heard these 30-second excerpts, described any thoughts that had occurred while listening, and rated various features of the thoughts and music. The occurrence and type of thoughts varied across genres, with classical and electronic excerpts evoking more thoughts than pop/rock excerpts. Classical excerpts evoked more music-related thoughts, fictional stories, and media-related memories, while electronic music evoked more abstract visual images than the other genres. Positively valenced music and more-liked excerpts elicited more positive thought content. Liking and familiarity with a style also increased thought occurrence, while familiarity decreased the novelty of thought content. These findings have key implications for understanding how music impacts imagination and creative processes.
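One plausible reading of the occurrence analysis, sketched below under stated assumptions (the data frame is fabricated and the model form is a guess, not the authors' analysis): a logistic regression of whether a thought occurred during an excerpt on genre, liking, and style familiarity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 600
df = pd.DataFrame({
    "thought": rng.integers(0, 2, n),    # did a thought occur? (placeholder)
    "genre": rng.choice(["classical", "electronic", "pop_rock"], n),
    "liking": rng.uniform(1, 7, n),      # placeholder excerpt liking ratings
    "familiarity": rng.uniform(1, 7, n), # placeholder style familiarity
})

# Model thought occurrence as a function of genre, liking, and familiarity.
fit = smf.logit("thought ~ C(genre) + liking + familiarity", data=df).fit()
print(fit.summary())
```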
Does emphasizing the role of people in engineering influence the memorability of engineering content? This study is part of a larger project through which our team developed a new undergraduate energy course to better reflect students’ cultures and lived experiences through asset-based pedagogies, helping students develop a sociotechnical mindset in engineering problem solving. In this study, students in the class were invited to participate in semi-structured interviews (n = 5) to explore our effectiveness in helping them develop a sociotechnical mindset around energy issues and conceptualize engineering as a sociotechnical endeavor. This study focuses on an activity during the interview where the participants were asked to sort a variety of images associated with class learning experiences along a spectrum of least to most memorable. Emergent themes from students’ responses revolved around learning experiences that included global perspectives and emphasized a “who” (i.e., whose problems, who is impacted by engineering, and what type of engineers the students will choose to become) as the most memorable. Our results indicate that students found the sociotechnical aspects of the course more memorable than the traditional canonical engineering content. These findings suggest that framing engineering content as sociotechnical can be one strategy to increase student engagement, increase memorability of lessons, and help students to think more deeply about their own goals as future engineers.
Humans routinely extract important information from images and videos, relying on their gaze. In contrast, computational systems still have difficulty annotating important visual information in a human-like manner, in part because human gaze is often not included in the modeling process. Human input is also particularly relevant for processing and interpreting affective visual information. To address this challenge, we captured human gaze, spoken language, and facial expressions simultaneously in an experiment with visual stimuli characterized by subjective and affective content. Observers described the content of complex emotional images and videos depicting positive and negative scenarios, as well as their feelings about the imagery being viewed. We explore patterns of these modalities, for example by comparing the affective nature of participant-elicited linguistic tokens with image valence. Additionally, we expand a framework for generating automatic alignments between the gaze and spoken-language modalities for visual annotation of images. Multimodal alignment is challenging because of the varying temporal offsets between modalities. We explore alignment robustness when images have affective content and whether image valence influences alignment results. We also study whether word-frequency-based filtering impacts results; both the unfiltered and filtered scenarios perform better than baseline comparisons, and filtering yields a substantial decrease in alignment error rate. We provide visualizations of the resulting annotations from multimodal alignment. This work has implications for areas such as image understanding, media accessibility, and multimodal data fusion.
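A minimal sketch of one way such gaze-speech alignment could work, assuming timestamped fixations and word onsets; the tolerance window, stop-word filter, and all data are illustrative, not the authors' framework.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    t: float          # fixation onset (s)
    region: str       # annotated image region under gaze

@dataclass
class Word:
    t: float          # word onset (s)
    token: str

STOPWORDS = {"the", "a", "of", "and", "is"}   # stand-in frequency filter

def align(fixations, words, window=0.5, filter_frequent=True):
    """Pair each fixation with spoken words whose onsets fall within
    `window` seconds, optionally dropping high-frequency function words."""
    if filter_frequent:
        words = [w for w in words if w.token not in STOPWORDS]
    return [(f.region, w.token)
            for f in fixations
            for w in words
            if abs(f.t - w.t) < window]

fixes = [Fixation(0.2, "face"), Fixation(1.5, "background")]
words = [Word(0.4, "the"), Word(0.6, "smiling"), Word(1.9, "storm")]
print(align(fixes, words))   # [('face', 'smiling'), ('background', 'storm')]
```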