Abstract
High-resolution awake mouse fMRI remains challenging despite extensive efforts to address motion-induced artifacts and stress. This study introduces an implantable radiofrequency (RF) surface coil design that minimizes image distortion caused by the air/tissue interface of mouse brains while simultaneously serving as a headpost for fixation during scanning. Using a 14 T scanner, high-resolution fMRI enabled brain-wide functional mapping of visual and vibrissa stimulation at 100 × 100 × 200 µm resolution with a 2 s per frame sampling rate. Besides activated ascending visual and vibrissa pathways, robust BOLD responses were detected in the anterior cingulate cortex upon visual stimulation and spread through the ventral retrosplenial area (VRA) with vibrissa air-puff stimulation, demonstrating higher-order sensory processing in association cortices of awake mice. In particular, the rapid hemodynamic responses in VRA upon vibrissa stimulation showed a strong correlation with the hippocampus, thalamus, and prefrontal cortical areas. Cross-correlation analysis with designated VRA responses revealed early positive BOLD signals at the contralateral barrel cortex (BC) occurring 2 s prior to the air-puff in awake mice with repetitive stimulation, which were not detectable with the randomized stimulation paradigm. This early BC activation indicated learned anticipation through the vibrissa system and association cortices in awake mice under continuous training with repetitive air-puff stimulation. This work establishes a high-resolution awake mouse fMRI platform, enabling brain-wide functional mapping of sensory signal processing in higher association cortical areas.

Significance Statement
This awake mouse fMRI platform was developed by implementing an advanced implantable radiofrequency (RF) coil scheme, which simultaneously served as a headpost to secure the mouse head during scanning.
The ultra-high spatial resolution (100 × 100 × 200 µm) BOLD fMRI enabled brain-wide mapping of the activated visual and vibrissa systems during sensory stimulation in awake mice, including association cortices (e.g., the anterior cingulate cortex and retrosplenial cortex) for higher-order sensory processing. In addition, activation of the barrel cortex 2 s prior to the air-puff indicated learned anticipation in awake mice under continuous training with repetitive vibrissa stimulation.
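The cross-correlation analysis described above (correlating a designated seed time course, e.g. from VRA, against another region's time course at varying lags to detect early responses such as the BC signal 2 s before the air-puff) can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the authors' analysis code; the function name, the frame-based lag convention, and the synthetic data are all hypothetical.

```python
import numpy as np

def crosscorr_lag(seed, voxel, max_lag):
    """Normalized cross-correlation between a seed time course and a
    voxel/region time course over integer frame lags in
    [-max_lag, +max_lag]. By this convention, a peak at a POSITIVE lag
    means the voxel signal leads the seed (e.g., with 2 s per frame,
    a peak at lag = 1 corresponds to a ~2 s earlier response)."""
    seed = (seed - seed.mean()) / seed.std()
    voxel = (voxel - voxel.mean()) / voxel.std()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = []
    for lag in lags:
        if lag < 0:
            r = np.mean(seed[:lag] * voxel[-lag:])   # seed shifted earlier
        elif lag > 0:
            r = np.mean(seed[lag:] * voxel[:-lag])   # voxel shifted earlier
        else:
            r = np.mean(seed * voxel)
        cc.append(r)
    return lags, np.array(cc)
```

In practice such a sketch would be applied voxel-wise to motion-corrected, detrended BOLD time series, with the lag of the correlation peak mapped across the brain.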
Scene Perception and Visuospatial Memory Converge at the Anterior Edge of Visually Responsive Cortex
To fluidly engage with the world, our brains must simultaneously represent both the scene in front of us and our memory of the immediate surrounding environment (i.e., local visuospatial context). How does the brain's functional architecture enable sensory and mnemonic representations to closely interface while also avoiding sensory-mnemonic interference? Here, we asked this question using first-person, head-mounted virtual reality and fMRI. Using virtual reality, human participants of both sexes learned a set of immersive, real-world visuospatial environments in which we systematically manipulated the extent of visuospatial context associated with a scene image in memory across three learning conditions, spanning from a single FOV to a city street. We used individualized, within-subject fMRI to determine which brain areas support memory of the visuospatial context associated with a scene during recall (Experiment 1) and recognition (Experiment 2). Across the whole brain, activity in three patches of cortex was modulated by the amount of known visuospatial context, each located immediately anterior to one of the three scene perception areas of high-level visual cortex. Individual subject analyses revealed that these anterior patches corresponded to three functionally defined place memory areas, which selectively respond when visually recalling personally familiar places. In addition to showing activity levels that were modulated by the amount of visuospatial context, multivariate analyses showed that these anterior areas represented the identity of the specific environment being recalled. Together, these results suggest a convergence zone for scene perception and memory of the local visuospatial context at the anterior edge of high-level visual cortex.

Significance Statement
As we move through the world, the visual scene around us is integrated with our memory of the wider visuospatial context.
Here, we sought to understand how the functional architecture of the brain enables coexisting representations of the current visual scene and memory of the surrounding environment. Using a combination of immersive virtual reality and fMRI, we show that memory of visuospatial context outside the current FOV is represented in a distinct set of brain areas immediately anterior and adjacent to the perceptually oriented scene-selective areas of high-level visual cortex. This functional architecture would allow efficient interaction between immediately adjacent mnemonic and perceptual areas while also minimizing interference between mnemonic and perceptual representations.
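As a toy illustration of the kind of multivariate analysis mentioned above (testing whether an area's activity pattern carries the identity of the specific environment being recalled), a minimal correlation-based nearest-centroid decoder might look like the following. This is a generic sketch of one common decoding approach, not the authors' pipeline; every name and the synthetic data are assumptions.

```python
import numpy as np

def nearest_centroid_decode(train_patterns, train_labels, test_pattern):
    """Assign a held-out voxel pattern to the condition (here, the
    recalled environment) whose mean training pattern (centroid) it
    correlates with most strongly."""
    classes = np.unique(train_labels)
    centroids = [train_patterns[train_labels == c].mean(axis=0) for c in classes]
    rs = [np.corrcoef(test_pattern, cen)[0, 1] for cen in centroids]
    return classes[int(np.argmax(rs))]
```

Above-chance decoding of environment identity from held-out patterns is the signature that a region represents which specific environment is being recalled, over and above overall activity level.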
- Award ID(s): 2144700
- PAR ID: 10511392
- Publisher / Repository: The Journal of Neuroscience
- Date Published:
- Journal Name: The Journal of Neuroscience
- Volume: 43
- Issue: 31
- ISSN: 0270-6474
- Page Range / eLocation ID: 5723 to 5737
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Each view of our environment captures only a subset of our immersive surroundings. Yet, our visual experience feels seamless. A puzzle for human neuroscience is to determine what cognitive mechanisms enable us to overcome our limited field of view and efficiently anticipate new views as we sample our visual surroundings. Here, we tested whether memory-based predictions of upcoming scene views facilitate efficient perceptual judgments across head turns. We tested this hypothesis using immersive, head-mounted virtual reality (VR). After learning a set of immersive real-world environments, participants (n = 101 across 4 experiments) were briefly primed with a single view from a studied environment and then turned left or right to make a perceptual judgment about an adjacent scene view. We found that participants' perceptual judgments were faster when they were primed with images from the same (vs. neutral or different) environments. Importantly, priming required memory: it only occurred in learned (vs. novel) environments, where the link between adjacent scene views was known. Further, consistent with a role in supporting active vision, priming only occurred in the direction of planned head turns and only benefited judgments for scene views presented in their learned spatiotopic positions. Taken together, we propose that memory-based predictions facilitate rapid perception across large-scale visual actions, such as head and body movements, and may be critical for efficient behavior in complex immersive environments.
Category selectivity is a fundamental principle of organization of perceptual brain regions. Human occipitotemporal cortex is subdivided into areas that respond preferentially to faces, bodies, artifacts, and scenes. However, observers need to combine information about objects from different categories to form a coherent understanding of the world. How is this multicategory information encoded in the brain? Studying the multivariate interactions between brain regions of male and female human subjects with fMRI and artificial neural networks, we found that the angular gyrus shows joint statistical dependence with multiple category-selective regions. Adjacent regions show effects for the combination of scenes and each other category, suggesting that scenes provide a context to combine information about the world. Additional analyses revealed a cortical map of areas that encode information across different subsets of categories, indicating that multicategory information is not encoded in a single centralized location, but in multiple distinct brain regions.

Significance Statement
Many cognitive tasks require combining information about entities from different categories. However, visual information about different categorical objects is processed by separate, specialized brain regions. How is the joint representation from multiple category-selective regions implemented in the brain? Using fMRI movie data and state-of-the-art multivariate statistical dependence based on artificial neural networks, we identified the angular gyrus encoding responses across face-, body-, artifact-, and scene-selective regions. Further, we showed a cortical map of areas that encode information across different subsets of categories.
These findings suggest that multicategory information is not encoded in a single centralized location but at multiple cortical sites, which may contribute to distinct cognitive functions, offering insight into integration across a variety of domains.
High-resolution awake mouse functional magnetic resonance imaging (fMRI) remains challenging despite extensive efforts to address motion-induced artifacts and stress. This study introduces an implantable radio frequency (RF) surface coil design that minimizes image distortion caused by the air/tissue interface of mouse brains while simultaneously serving as a headpost for fixation during scanning. Furthermore, this study provides a thorough acclimation method used to accustom animals to the MRI environment, minimizing motion-induced artifacts. Using a 14 T scanner, high-resolution fMRI enabled brain-wide functional mapping of visual and vibrissa stimulation at 100 µm × 100 µm × 200 µm resolution with a 2 s per frame sampling rate. Besides activated ascending visual and vibrissa pathways, robust blood oxygen level-dependent (BOLD) responses were detected in the anterior cingulate cortex upon visual stimulation and spread through the ventral retrosplenial area (VRA) with vibrissa air-puff stimulation, demonstrating higher-order sensory processing in association cortices of awake mice. In particular, the rapid hemodynamic responses in VRA upon vibrissa stimulation showed a strong correlation with the hippocampus, thalamus, and prefrontal cortical areas. Cross-correlation analysis with designated VRA responses revealed early positive BOLD signals at the contralateral barrel cortex (BC) occurring 2 s prior to the air-puff in awake mice with repetitive stimulation, which were not detected using a randomized stimulation paradigm. This early BC activation indicated a learned anticipation through the vibrissa system and association cortices in awake mice under continuous exposure to repetitive air-puff stimulation. This work establishes a high-resolution awake mouse fMRI platform, enabling brain-wide functional mapping of sensory signal processing in higher association cortical areas.
The human medial temporal lobe (MTL) plays a crucial role in recognizing visual objects, a key cognitive function that relies on the formation of semantic representations. Nonetheless, it remains unknown how visual information of general objects is translated into semantic representations in the MTL. Furthermore, the debate about whether the human MTL is involved in perception has endured for a long time. To address these questions, we investigated three distinct models of neural object coding—semantic coding, axis-based feature coding, and region-based feature coding—in each subregion of the MTL, using high-resolution fMRI in two male and six female participants. Our findings revealed the presence of semantic coding throughout the MTL, with a higher prevalence observed in the parahippocampal cortex (PHC) and perirhinal cortex (PRC), while axis coding and region coding were primarily observed in the earlier regions of the MTL. Moreover, we demonstrated that voxels exhibiting axis coding supported the transition to region coding and contained information relevant to semantic coding. Together, by providing a detailed characterization of neural object coding schemes and offering a comprehensive summary of visual coding information for each MTL subregion, our results not only emphasize a clear role of the MTL in perceptual processing but also shed light on the translation of perception-driven representations of visual features into memory-driven representations of semantics along the MTL processing pathway.

Significance Statement
In this study, we delved into the mechanisms underlying visual object recognition within the human medial temporal lobe (MTL), a pivotal region known for its role in the formation of semantic representations crucial for memory. In particular, the translation of visual information into semantic representations within the MTL has remained unclear, and the enduring debate regarding the involvement of the human MTL in perception has persisted.
To address these questions, we comprehensively examined distinct neural object coding models across each subregion of the MTL, leveraging high-resolution fMRI. We also showed the transition of information between object coding models and across MTL subregions. Our findings advance our understanding of the intricate pathway involved in visual object coding.