Distinct lines of research in both humans and animals point to a specific role of the hippocampus in spatial and episodic memory function. The discovery of concept cells in the hippocampus and surrounding medial temporal lobe (MTL) regions suggests that the MTL maps physical and semantic spaces with a similar neural architecture. Here, we studied the emergence of such maps using MTL microwire recordings from 20 patients (9 female, 11 male) navigating a virtual environment featuring salient landmarks with established semantic meaning. We present several key findings. The array of local field potentials in the MTL contains sufficient information for above-chance decoding of subjects' instantaneous location in the environment. Closer examination revealed that as subjects gain experience with the environment, the field potentials come to represent the subjects' locations both in virtual space and in high-dimensional semantic space. Similarly, we observe a learning effect on temporal sequence coding. Over time, field potentials come to represent future locations, even after controlling for spatial proximity. This predictive coding of future states, more so than the strength of spatial representations per se, is linked to variability in subjects' navigation performance. Our results thus support the conceptualization of the MTL as a memory space, representing both spatial and nonspatial information to plan future actions and predict their outcomes.
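The decoding result described above can be illustrated with a minimal sketch. This is not the authors' analysis pipeline; it is a generic cross-validated decoder on synthetic data, where multichannel "LFP" features are weakly modulated by a discretized location label, and decoding accuracy is compared against chance (all names and parameters here are assumptions for illustration).

```python
# Illustrative sketch, NOT the paper's method: decoding a discretized
# virtual-environment location from synthetic multichannel LFP features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_locations = 300, 32, 4

# Synthetic band-power features: each location shifts the mean activity
# of the channels by a location-specific signature, plus noise.
locations = rng.integers(0, n_locations, size=n_trials)
signatures = rng.normal(0.0, 1.0, size=(n_locations, n_channels))
features = signatures[locations] + rng.normal(0.0, 2.0, size=(n_trials, n_channels))

# Cross-validated decoding accuracy; chance level is 1 / n_locations = 0.25.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, locations, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / n_locations:.2f})")
```

Above-chance accuracy on held-out folds is the same logic used to claim that the field-potential array carries location information, though the real analysis operates on recorded neural data rather than simulated features.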
The human medial temporal lobe (MTL) plays a crucial role in recognizing visual objects, a key cognitive function that relies on the formation of semantic representations. Nonetheless, it remains unknown how visual information of general objects is translated into semantic representations in the MTL. Furthermore, the debate about whether the human MTL is involved in perception has endured for a long time. To address these questions, we investigated three distinct models of neural object coding—semantic coding, axis-based feature coding, and region-based feature coding—in each subregion of the MTL, using high-resolution fMRI in two male and six female participants. Our findings revealed the presence of semantic coding throughout the MTL, with a higher prevalence observed in the parahippocampal cortex (PHC) and perirhinal cortex (PRC), while axis coding and region coding were primarily observed in the earlier regions of the MTL. Moreover, we demonstrated that voxels exhibiting axis coding supported the transition to region coding and contained information relevant to semantic coding. Together, by providing a detailed characterization of neural object coding schemes and offering a comprehensive summary of visual coding information for each MTL subregion, our results not only emphasize a clear role of the MTL in perceptual processing but also shed light on the translation of perception-driven representations of visual features into memory-driven representations of semantics along the MTL processing pathway.
- NSF-PAR ID: 10493492
- DOI Prefix: 10.1523
- Journal Name: The Journal of Neuroscience
- ISSN: 0270-6474
- Size: Article No. e2135232024
- Sponsoring Org: National Science Foundation
More Like this
SIGNIFICANCE STATEMENT Using rare microwire recordings, we studied the representation of spatial, semantic, and temporal information in the human MTL. Our findings demonstrate that subjects acquire a cognitive map that simultaneously represents the spatial and semantic relations between landmarks. We further show that the same learned representation is used to predict future states, implicating MTL cell assemblies as the building blocks of prospective memory functions.
Abstract Introduction How do multiple sources of information interact to form mental representations of object categories? It is commonly held that object categories reflect the integration of perceptual features and semantic/knowledge‐based features. To explore the relative contributions of these two sources of information, we used functional magnetic resonance imaging (fMRI) to identify regions involved in the representation of object categories with shared visual and/or semantic features.
Methods Participants (N = 20) viewed a series of objects that varied in their degree of visual and semantic overlap in the MRI scanner. We used a blocked adaptation design to identify sensitivity to visual and semantic features in a priori visual processing regions and in a distributed network of object processing regions with an exploratory whole‐brain analysis. Results Somewhat surprisingly, within higher‐order visual processing regions—specifically lateral occipital cortex (LOC)—we did not obtain any difference in neural adaptation for shared visual versus semantic category membership. More broadly, both visual and semantic information affected a distributed network of independently identified category‐selective regions. Adaptation was seen in a whole‐brain network of processing regions in response to visual similarity and semantic similarity; specifically, the angular gyrus (AnG) adapted to visual similarity and the dorsomedial prefrontal cortex (DMPFC) adapted to both visual and semantic similarity.
Conclusions Our findings suggest that perceptual features help organize mental categories throughout the object processing hierarchy. Most notably, visual similarity also influenced adaptation in nonvisual brain regions (i.e., AnG and DMPFC). We conclude that category‐relevant visual features are maintained in higher‐order conceptual representations and visual information plays an important role in both the acquisition and neural representation of conceptual object categories.
The medial temporal lobe (MTL) is traditionally considered to be a system that is specialized for long-term memory. Recent work has challenged this notion by demonstrating that this region can contribute to many domains of cognition beyond long-term memory, including perception and attention. One potential reason why the MTL (and hippocampus specifically) contributes broadly to cognition is that it contains relational representations—representations of multidimensional features of experience and their unique relationship to one another—that are useful in many different cognitive domains. Here, we explore the hypothesis that the hippocampus/MTL plays a critical role in attention and perception via relational representations. We compared human participants with MTL damage to healthy age- and education-matched individuals on attention tasks that varied in relational processing demands. On each trial, participants viewed two images (rooms with paintings). On “similar room” trials, they judged whether the rooms had the same spatial layout from a different perspective. On “similar art” trials, they judged whether the paintings could have been painted by the same artist. On “identical” trials, participants simply had to detect identical paintings or rooms. MTL lesion patients were significantly and selectively impaired on the similar room task. This work provides further evidence that the hippocampus/MTL plays a ubiquitous role in cognition by virtue of its relational and spatial representations and highlights its important contributions to rapid perceptual processes that benefit from attention.
Abstract Orientation selectivity in primate visual cortex is organized into cortical columns. Since cortical columns are at a finer spatial scale than the sampling resolution of standard BOLD fMRI measurements, analysis approaches have been proposed to peer past these spatial resolution limitations. It was recently found that these methods are predominantly sensitive to stimulus vignetting - a form of selectivity arising from an interaction of the oriented stimulus with the aperture edge. Beyond vignetting, it is not clear whether orientation-selective neural responses are detectable in BOLD measurements. Here, we leverage a dataset of visual cortical responses measured using high-field 7T fMRI. Fitting these responses using image-computable models, we compensate for vignetting and nonetheless find reliable tuning for orientation. Results further reveal a coarse-scale map of orientation preference that may constitute the neural basis for known perceptual anisotropies. These findings settle a long-standing debate in human neuroscience, and provide insights into functional organization principles of visual cortex.
According to the efficient coding hypothesis, neural populations encode information optimally when representations are high-dimensional and uncorrelated. However, such codes may carry a cost in terms of generalization and robustness. Past empirical studies of early visual cortex (V1) in rodents have suggested that this tradeoff indeed constrains sensory representations. However, it remains unclear whether these insights generalize across the hierarchy of the human visual system, and particularly to object representations in high-level occipitotemporal cortex (OTC). To gain new empirical clarity, here we develop a family of object recognition models with parametrically varying dropout proportion, which induces systematically varying dimensionality of internal responses (while controlling all other inductive biases). We find that increasing dropout produces an increasingly smooth, low-dimensional representational space. Optimal robustness to lesioning is observed at around 70% dropout, after which both accuracy and robustness decline. Representational comparison to large-scale 7T fMRI data from occipitotemporal cortex in the Natural Scenes Dataset reveals that this optimal degree of dropout is also associated with maximal emergent neural predictivity. Finally, using new techniques for achieving denoised estimates of the eigenspectrum of human fMRI responses, we compare the rate of eigenspectrum decay between model and brain feature spaces. We observe that the match between model and brain representations is associated with a common balance between efficiency and robustness in the representational space. These results suggest that varying dropout may reveal an optimal point of balance between the efficiency of high-dimensional codes and the robustness of low-dimensional codes in hierarchical vision systems.
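The dimensionality notion running through this abstract can be made concrete with a small sketch. This is not the paper's analysis; it illustrates one standard summary of effective dimensionality, the participation ratio of a covariance eigenspectrum, PR = (Σλᵢ)² / Σλᵢ², on two hypothetical eigenspectra: a flat spectrum (a fully decorrelated, maximally high-dimensional code) and a fast power-law decay (a smooth, low-dimensional code).

```python
# Illustrative sketch, NOT the paper's analysis: the participation ratio
# summarizes how many dimensions a representation effectively occupies.
import numpy as np

def participation_ratio(eigvals):
    """PR = (sum of eigenvalues)^2 / (sum of squared eigenvalues)."""
    eigvals = np.asarray(eigvals, dtype=float)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

n = 1000
flat = np.ones(n)                            # decorrelated code: PR equals n
power_law = 1.0 / np.arange(1, n + 1) ** 2   # fast eigenspectrum decay

print(participation_ratio(flat))       # maximally high-dimensional
print(participation_ratio(power_law))  # effectively very few dimensions
```

Under this summary, a flat spectrum of 1000 equal eigenvalues has PR = 1000, while a 1/n² decay has a PR of only a few, which is the sense in which increasing dropout is said to "lower the dimensionality" of the model's internal responses.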