Research at the intersection of computer vision and neuroscience has revealed a hierarchical correspondence between the layers of deep convolutional neural networks (DCNNs) and the cascade of regions along the human ventral visual cortex. Recently, studies have uncovered the emergence of human-interpretable concepts within DCNN layers trained to identify visual objects and scenes. Here, we asked whether an artificial neural network (with a convolutional structure) trained for visual categorization would demonstrate spatial correspondences with human brain regions showing central/peripheral biases. Using representational similarity analysis, we compared the activations of the convolutional layers of a DCNN trained for object and scene categorization with neural representations in human visual brain regions. The results reveal a brain-like topographical organization in the layers of the DCNN, such that activations of layer units with a central bias were associated with brain regions showing foveal tendencies (e.g., fusiform gyrus), whereas activations of layer units selective for image backgrounds were associated with cortical regions showing a peripheral preference (e.g., parahippocampal cortex). The emergence of this categorical topographical correspondence between DCNNs and brain regions suggests that these models are a good approximation of the perceptual representation generated by biological neural networks.
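As a concrete illustration of the comparison described in this abstract, the sketch below builds representational dissimilarity matrices (RDMs) for one DCNN layer and one brain region and rank-correlates them, the core representational similarity analysis (RSA) step. The arrays are synthetic placeholders, not the study's data, and the array names are invented for illustration.

```python
# Minimal RSA sketch with synthetic data; shows the general recipe,
# not the paper's actual pipeline or dimensions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 50
layer_acts = rng.standard_normal((n_stimuli, 4096))  # hypothetical DCNN layer units
roi_resps = rng.standard_normal((n_stimuli, 300))    # hypothetical voxels in one ROI

# RDMs (condensed upper triangles): 1 - Pearson correlation between the
# response patterns evoked by each pair of stimuli.
layer_rdm = pdist(layer_acts, metric="correlation")
roi_rdm = pdist(roi_resps, metric="correlation")

# Second-order comparison: rank-correlate the two RDMs.
rho, p = spearmanr(layer_rdm, roi_rdm)
print(f"layer-ROI RSA correlation: rho={rho:.3f}, p={p:.3g}")
```

Repeating this comparison across layers and across central- versus peripheral-biased regions would yield the kind of layer-by-region correspondence map the abstract describes; the specifics above are illustrative only.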
- Award ID(s): 1942438
- NSF-PAR ID: 10510890
- Publisher / Repository: Open Review
- Date Published:
- Journal Name: International Conference on Learning Representations
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract
Neuroimaging studies of human memory have consistently found that univariate responses in parietal cortex track episodic experience with stimuli (whether stimuli are 'old' or 'new'). More recently, pattern-based fMRI studies have shown that parietal cortex also carries information about the semantic content of remembered experiences. However, it is not well understood how memory-based and content-based signals are integrated within parietal cortex. Here, in humans (males and females), we used voxel-wise encoding models and a recognition memory task to predict the fMRI activity patterns evoked by complex natural scene images based on (1) the episodic history and (2) the semantic content of each image. Models were generated and compared across distinct subregions of parietal cortex and for occipitotemporal cortex. We show that parietal and occipitotemporal regions each encode memory and content information, but they differ in how they combine this information. Among parietal subregions, angular gyrus was characterized by robust and overlapping effects of memory and content. Moreover, subject-specific semantic tuning functions revealed that successful recognition shifted the amplitude of tuning functions in angular gyrus but did not change the selectivity of tuning. In other words, effects of memory and content were additive in angular gyrus. This pattern of data contrasted with occipitotemporal cortex where memory and content effects were interactive: memory effects were preferentially expressed by voxels tuned to the content of a remembered image. Collectively, these findings provide unique insight into how parietal cortex combines information about episodic memory and semantic content.
SIGNIFICANCE STATEMENT Neuroimaging studies of human memory have identified multiple brain regions that not only carry information about “whether” a visual stimulus is successfully recognized but also “what” the content of that stimulus includes. However, a fundamental and open question concerns how the brain integrates these two types of information (memory and content). Here, using a powerful combination of fMRI analysis methods, we show that parietal cortex, particularly the angular gyrus, robustly combines memory- and content-related information, but these two forms of information are represented via additive, independent signals. In contrast, memory effects in high-level visual cortex critically depend on (and interact with) content representations. Together, these findings reveal multiple and distinct ways in which the brain combines memory- and content-related information.
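The voxel-wise encoding-model logic described above can be sketched as a regularized regression from stimulus features to voxel responses, with semantic content features and an old/new memory regressor entering additively. Everything below (feature dimensions, the regressor coding, the ridge penalty) is an illustrative assumption, not the authors' actual model.

```python
# Hedged sketch of an additive content + memory encoding model.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_images, n_sem, n_vox = 200, 20, 500
semantic = rng.standard_normal((n_images, n_sem))  # hypothetical semantic features
old_new = rng.integers(0, 2, size=(n_images, 1))   # episodic history (1 = old, 0 = new)
X = np.hstack([semantic, old_new])                 # additive design: content + memory
Y = rng.standard_normal((n_images, n_vox))         # voxel responses (synthetic)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, Y_tr)

# Score each voxel by the correlation between predicted and observed
# held-out responses, a common encoding-model accuracy measure.
pred = model.predict(X_te)
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_vox)]
print(f"median voxel prediction r = {np.median(r):.3f}")
```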
A fundamental principle of neural representation is to minimize wiring length by spatially organizing neurons according to the frequency of their communication [Sterling and Laughlin, 2015]. A consequence is that nearby regions of the brain tend to represent similar content, a phenomenon explored in the visual cortex by recent work [Doshi and Konkle, 2023, Tong et al., 2023]. Here, we use the notion of cortical distance as a baseline to ground, evaluate, and interpret measures of representational distance. We compare several popular methods—both second-order methods (Representational Similarity Analysis, Centered Kernel Alignment) and first-order methods (Shape Metrics)—and calculate how well representational distance reflects 2D anatomical distance along the visual cortex (the anatomical stress score). We evaluate these metrics on a large-scale fMRI dataset of human ventral visual cortex [Allen et al., 2022b] and observe that the three types of Shape Metrics produce representational-anatomical stress scores with the smallest variance across subjects (Z score = -1.5), which suggests that first-order representational scores quantify the relationship between representational and cortical geometry in a way that is more invariant across subjects. Our work establishes a criterion with which to compare methods for quantifying representational similarity, with implications for studying the anatomical organization of high-level ventral visual cortex.
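The abstract does not give the formula for its anatomical stress score, so the following is only a plausible sketch of the general idea: compare pairwise representational distances between patches of cortex against their 2D anatomical distances using an MDS-style normalized stress. All names, data, and the exact formula are assumptions.

```python
# Hedged sketch: representational vs. anatomical distance misfit ("stress").
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
n_patches, n_stimuli = 30, 100
coords = rng.uniform(0, 100, size=(n_patches, 2))    # flattened cortical coords (mm)
resps = rng.standard_normal((n_patches, n_stimuli))  # response profile per patch

d_anat = pdist(coords)                  # 2D anatomical distances between patches
d_rep = pdist(resps, "correlation")     # representational distances between patches

# Scale-invariant stress: rescale representational distances to best match
# the anatomical ones, then measure the residual misfit (lower = closer match).
scale = (d_anat @ d_rep) / (d_rep @ d_rep)
stress = np.sqrt(np.sum((d_anat - scale * d_rep) ** 2) / np.sum(d_anat ** 2))
print(f"anatomical stress score: {stress:.3f}")
```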
Abstract
Introduction: How do multiple sources of information interact to form mental representations of object categories? It is commonly held that object categories reflect the integration of perceptual features and semantic/knowledge-based features. To explore the relative contributions of these two sources of information, we used functional magnetic resonance imaging (fMRI) to identify regions involved in the representation of object categories with shared visual and/or semantic features.
Methods: Participants (N = 20) viewed a series of objects that varied in their degree of visual and semantic overlap in the MRI scanner. We used a blocked adaptation design to identify sensitivity to visual and semantic features in a priori visual processing regions and in a distributed network of object processing regions with an exploratory whole-brain analysis.
Results: Somewhat surprisingly, within higher-order visual processing regions—specifically lateral occipital cortex (LOC)—we did not obtain any difference in neural adaptation for shared visual versus semantic category membership. More broadly, both visual and semantic information affected a distributed network of independently identified category-selective regions. Adaptation was seen in a whole-brain network of processing regions in response to visual and semantic similarity; specifically, the angular gyrus (AnG) adapted to visual similarity and the dorsomedial prefrontal cortex (DMPFC) adapted to both visual and semantic similarity.
Conclusions: Our findings suggest that perceptual features help organize mental categories throughout the object processing hierarchy. Most notably, visual similarity also influenced adaptation in nonvisual brain regions (i.e., AnG and DMPFC). We conclude that category-relevant visual features are maintained in higher-order conceptual representations and that visual information plays an important role in both the acquisition and neural representation of conceptual object categories.
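The logic of the blocked adaptation design above can be illustrated in a few lines: if a region is sensitive to a shared feature, its response should be lower (adapted) in blocks of objects sharing that feature than in blocks of unrelated objects. The condition names and numbers below are hypothetical, not from the study.

```python
# Illustrative block-adaptation contrast on synthetic ROI responses.
import numpy as np

rng = np.random.default_rng(3)
n_blocks = 24
roi_mean = {  # mean ROI response per block type (synthetic % signal change)
    "visually_similar": rng.normal(0.8, 0.1, n_blocks),
    "semantically_similar": rng.normal(0.9, 0.1, n_blocks),
    "unrelated": rng.normal(1.0, 0.1, n_blocks),
}

# Adaptation effect = unrelated blocks minus feature-matched blocks;
# a positive value indicates repetition suppression for that feature.
for cond in ("visually_similar", "semantically_similar"):
    effect = roi_mean["unrelated"].mean() - roi_mean[cond].mean()
    print(f"adaptation to {cond}: {effect:.3f}")
```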
Representational geometry and connectivity-based studies offer complementary insights into neural information processing, but it is unclear how representations and networks interact to generate neural information. Using a multi-task fMRI dataset, we investigate the role of intrinsic connectivity in shaping diverse representational geometries across the human cortex. Activity flow modeling, which generates neural activity based on connectivity-weighted propagation from other regions, successfully recreated similarity structure and a compression-then-expansion pattern of task representation dimensionality. We introduce a novel measure, convergence, quantifying the degree to which connectivity converges onto target regions. As hypothesized, convergence corresponded with compression of representations and helped explain the observed compression-then-expansion pattern of task representation dimensionality along the cortical hierarchy. These results underscore the generative role of intrinsic connectivity in sculpting representational geometries and suggest that structured connectivity properties, such as convergence, contribute to representational transformations. By bridging representational geometry and connectivity-based frameworks, this work offers a more unified understanding of neural information processing and the computational relevance of brain architecture.
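Activity flow modeling, as described above, predicts a target region's task activity as connectivity-weighted input summed over all other regions. The sketch below uses synthetic connectivity and activations; the convergence proxy at the end is an assumption (the abstract does not give the measure's formula), here approximated as the effective number of source regions feeding each target.

```python
# Hedged sketch of activity flow modeling with a hypothetical convergence proxy.
import numpy as np

rng = np.random.default_rng(4)
n_regions, n_tasks = 100, 12
fc = rng.standard_normal((n_regions, n_regions))  # intrinsic connectivity (synthetic)
np.fill_diagonal(fc, 0.0)                         # exclude self-connections
activity = rng.standard_normal((n_regions, n_tasks))

# Activity flow: predicted activity of each target region for each task is
# the connectivity-weighted sum of all other regions' activity.
predicted = fc @ activity  # shape: (n_regions, n_tasks)

# Convergence proxy (an assumption, not the paper's definition): the
# participation ratio of each target's squared incoming weights, i.e. the
# effective number of sources contributing input; fewer effective sources
# would indicate stronger convergence onto that target.
w2 = fc ** 2
eff_sources = (w2.sum(axis=1) ** 2) / (w2 ** 2).sum(axis=1)
print(predicted.shape, float(eff_sources.mean()))
```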