Title: Parts‐based representations of perceived face movements in the superior temporal sulcus
Abstract

Facial motion is a primary source of social information about other humans. Prior fMRI studies have identified regions of the superior temporal sulcus (STS) that respond specifically to perceived face movements (termed fSTS), but little is known about the nature of motion representations in these regions. Here we use fMRI and multivoxel pattern analysis to characterize the representational content of the fSTS. Participants viewed a set of specific eye and mouth movements, as well as combined eye and mouth movements. Our results demonstrate that fSTS response patterns contain information about face movements, including subtle distinctions between types of eye and mouth movements. These representations generalize across the actor performing the movement, and across small differences in visual position. Critically, patterns of response to combined movements could be well predicted by linear combinations of responses to individual eye and mouth movements, pointing to a parts‐based representation of complex face movements. These results indicate that the fSTS plays an intermediate role in the process of inferring social content from visually perceived face movements, containing a representation that is sufficiently abstract to generalize across low‐level visual details, but still tied to the kinematics of face part movements.
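The key parts-based claim — that the pattern of response to a combined eye-and-mouth movement is well predicted by a linear combination of the responses to the individual movements — can be illustrated with a small synthetic sketch. All data here are simulated and the weights are hypothetical; this only shows the form of the analysis, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 100

# Hypothetical multivoxel response patterns (one value per voxel)
eye = rng.normal(size=n_voxels)      # response to an eye movement alone
mouth = rng.normal(size=n_voxels)    # response to a mouth movement alone
noise = rng.normal(scale=0.3, size=n_voxels)
combined = 0.6 * eye + 0.5 * mouth + noise  # observed combined-movement pattern

# Fit weights for the linear combination by least squares
X = np.column_stack([eye, mouth])
weights, *_ = np.linalg.lstsq(X, combined, rcond=None)
predicted = X @ weights

# A parts-based code predicts a high pattern correlation between the
# linear-combination prediction and the observed combined pattern
r = np.corrcoef(predicted, combined)[0, 1]
print(f"weights={weights.round(2)}, r={r:.2f}")
```

Under a parts-based representation the fitted prediction correlates strongly with the observed combined pattern; a strongly nonlinear (holistic) code would leave this correlation low.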

 
NSF-PAR ID:
10460269
Author(s) / Creator(s):
 ;  
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
Human Brain Mapping
Volume:
40
Issue:
8
ISSN:
1065-9471
Page Range / eLocation ID:
p. 2499-2510
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Saccadic eye movements (saccades) disrupt the continuous flow of visual information, yet our perception of the visual world remains uninterrupted. Here we assess the representation of the visual scene across saccades from single-trial spike trains of extrastriate visual areas, using a combined electrophysiology and statistical modeling approach. Using a model-based decoder we generate a high temporal resolution readout of visual information, and identify the specific changes in neurons’ spatiotemporal sensitivity that underlie an integrated perisaccadic representation of visual space. Our results show that by maintaining a memory of the visual scene, extrastriate neurons produce an uninterrupted representation of the visual world. Extrastriate neurons exhibit a late response enhancement close to the time of saccade onset, which preserves the latest pre-saccadic information until the post-saccadic flow of retinal information resumes. These results show how our brain exploits available information to maintain a representation of the scene while visual inputs are disrupted.
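The model-based decoding idea above — reading the visual scene out of single-trial spike trains — can be sketched with a standard maximum-likelihood Poisson decoder. Everything here (tuning values, two scene states, trial counts) is a hypothetical illustration of the general technique, not the study's specific model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical tuning: each neuron's mean spike count under two scene
# states (e.g., pre-saccadic vs post-saccadic stimulus position)
n_neurons = 30
rates = rng.uniform(1.0, 10.0, size=(2, n_neurons))

def decode(counts, rates):
    """Maximum-likelihood readout: pick the state whose Poisson model
    best explains the observed single-trial spike counts."""
    # log P(counts | state), summed over neurons, up to a constant
    loglik = counts @ np.log(rates).T - rates.sum(axis=1)
    return int(np.argmax(loglik))

# Simulate single trials generated from state 0 and decode each one
trials = rng.poisson(rates[0], size=(500, n_neurons))
acc = np.mean([decode(t, rates) == 0 for t in trials])
print(f"decoding accuracy = {acc:.2f}")
```

With enough differentially tuned neurons, single-trial readout is highly reliable; applying such a decoder in sliding time windows yields the kind of high-temporal-resolution readout the abstract describes.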

     
  2. Abstract

    Faces are salient social stimuli that attract a stereotypical pattern of eye movement. The human amygdala and hippocampus are involved in various aspects of face processing; however, it remains unclear how they encode the content of fixations when viewing faces. To answer this question, we employed single-neuron recordings with simultaneous eye tracking when participants viewed natural face stimuli. We found a class of neurons in the human amygdala and hippocampus that encoded salient facial features such as the eyes and mouth. With a control experiment using non-face stimuli, we further showed that feature selectivity was specific to faces. We also found another population of neurons that differentiated saccades to the eyes vs. the mouth. Population decoding confirmed our results and further revealed the temporal dynamics of face feature coding. Interestingly, we found that the amygdala and hippocampus played different roles in encoding facial features. Lastly, we revealed two functional roles of feature-selective neurons: 1) they encoded the salient region for face recognition, and 2) they were related to perceived social trait judgments. Together, our results link eye movement with neural face processing and provide important mechanistic insights for human face perception.

     
  3. Neuroimaging studies of human memory have consistently found that univariate responses in parietal cortex track episodic experience with stimuli (whether stimuli are 'old' or 'new'). More recently, pattern-based fMRI studies have shown that parietal cortex also carries information about the semantic content of remembered experiences. However, it is not well understood how memory-based and content-based signals are integrated within parietal cortex. Here, in humans (males and females), we used voxel-wise encoding models and a recognition memory task to predict the fMRI activity patterns evoked by complex natural scene images based on (1) the episodic history and (2) the semantic content of each image. Models were generated and compared across distinct subregions of parietal cortex and for occipitotemporal cortex. We show that parietal and occipitotemporal regions each encode memory and content information, but they differ in how they combine this information. Among parietal subregions, angular gyrus was characterized by robust and overlapping effects of memory and content. Moreover, subject-specific semantic tuning functions revealed that successful recognition shifted the amplitude of tuning functions in angular gyrus but did not change the selectivity of tuning. In other words, effects of memory and content were additive in angular gyrus. This pattern of data contrasted with occipitotemporal cortex where memory and content effects were interactive: memory effects were preferentially expressed by voxels tuned to the content of a remembered image. Collectively, these findings provide unique insight into how parietal cortex combines information about episodic memory and semantic content.

    SIGNIFICANCE STATEMENT: Neuroimaging studies of human memory have identified multiple brain regions that not only carry information about “whether” a visual stimulus is successfully recognized but also “what” the content of that stimulus includes. However, a fundamental and open question concerns how the brain integrates these two types of information (memory and content). Here, using a powerful combination of fMRI analysis methods, we show that parietal cortex, particularly the angular gyrus, robustly combines memory- and content-related information, but these two forms of information are represented via additive, independent signals. In contrast, memory effects in high-level visual cortex critically depend on (and interact with) content representations. Together, these findings reveal multiple and distinct ways in which the brain combines memory- and content-related information.
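The voxel-wise encoding approach described above — predicting fMRI activity patterns from a memory regressor plus semantic content features — can be sketched with a ridge-regularized linear model. The design matrix, feature counts, and noise level below are all hypothetical placeholders for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_images, n_voxels = 200, 50

# Hypothetical design: one binary memory regressor (old=1 / new=0)
# plus a few semantic content features per image
memory = rng.integers(0, 2, size=(n_images, 1)).astype(float)
content = rng.normal(size=(n_images, 4))
X = np.hstack([memory, content])

# Simulate voxel responses as additive memory + content effects
true_w = rng.normal(size=(X.shape[1], n_voxels))
Y = X @ true_w + rng.normal(scale=0.5, size=(n_images, n_voxels))

# Ridge-regularized encoding model, fit for all voxels in closed form
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Model evaluation: correlation of predicted vs observed, per voxel
pred = X @ W
r = np.array([np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(n_voxels)])
print(f"mean voxel-wise r = {r.mean():.2f}")
```

Comparing such models fit with memory features only, content features only, or both (and whether a memory-by-content interaction term improves fit) is the kind of contrast that distinguishes additive coding in angular gyrus from interactive coding in occipitotemporal cortex.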

     
  4. Category selectivity is a fundamental principle of organization of perceptual brain regions. Human occipitotemporal cortex is subdivided into areas that respond preferentially to faces, bodies, artifacts, and scenes. However, observers need to combine information about objects from different categories to form a coherent understanding of the world. How is this multicategory information encoded in the brain? Studying the multivariate interactions between brain regions of male and female human subjects with fMRI and artificial neural networks, we found that the angular gyrus shows joint statistical dependence with multiple category-selective regions. Adjacent regions show effects for the combination of scenes and each other category, suggesting that scenes provide a context to combine information about the world. Additional analyses revealed a cortical map of areas that encode information across different subsets of categories, indicating that multicategory information is not encoded in a single centralized location, but in multiple distinct brain regions.

    SIGNIFICANCE STATEMENT: Many cognitive tasks require combining information about entities from different categories. However, visual information about different categorical objects is processed by separate, specialized brain regions. How is the joint representation from multiple category-selective regions implemented in the brain? Using fMRI movie data and state-of-the-art multivariate statistical dependence measures based on artificial neural networks, we identified the angular gyrus as encoding responses across face-, body-, artifact-, and scene-selective regions. Further, we showed a cortical map of areas that encode information across different subsets of categories. These findings suggest that multicategory information is not encoded in a single centralized location, but at multiple cortical sites which might contribute to distinct cognitive functions, offering insights to understand integration in a variety of domains.

  5. Abstract

    Introduction

    How do multiple sources of information interact to form mental representations of object categories? It is commonly held that object categories reflect the integration of perceptual features and semantic/knowledge‐based features. To explore the relative contributions of these two sources of information, we used functional magnetic resonance imaging (fMRI) to identify regions involved in the representation of object categories with shared visual and/or semantic features.

    Methods

    Participants (N = 20) viewed a series of objects that varied in their degree of visual and semantic overlap in the MRI scanner. We used a blocked adaptation design to identify sensitivity to visual and semantic features in a priori visual processing regions and in a distributed network of object processing regions with an exploratory whole‐brain analysis.

    Results

    Somewhat surprisingly, within higher‐order visual processing regions—specifically lateral occipital cortex (LOC)—we did not obtain any difference in neural adaptation for shared visual versus semantic category membership. More broadly, both visual and semantic information affected a distributed network of independently identified category‐selective regions. Adaptation was seen in a whole‐brain network of processing regions in response to visual similarity and semantic similarity; specifically, the angular gyrus (AnG) adapted to visual similarity and the dorsomedial prefrontal cortex (DMPFC) adapted to both visual and semantic similarity.

    Conclusions

    Our findings suggest that perceptual features help organize mental categories throughout the object processing hierarchy. Most notably, visual similarity also influenced adaptation in nonvisual brain regions (i.e., AnG and DMPFC). We conclude that category‐relevant visual features are maintained in higher‐order conceptual representations and visual information plays an important role in both the acquisition and neural representation of conceptual object categories.

     