When searching for an object in a cluttered scene, we can use our memory of the target object's features to guide our search, and the responses of neurons in multiple cortical visual areas are enhanced when their receptive field contains a stimulus sharing target features. Here we tested the role of the ventral prearcuate region (VPA) of prefrontal cortex in the control of feature attention in cortical visual area V4. Unilateral inactivation of VPA in monkeys performing a free-viewing visual search impaired their ability to find a target stimulus in an array of stimuli in the affected hemifield, but left intact their ability to make saccades to targets presented alone. Simultaneous recordings in V4 revealed that the effects of feature attention on V4 responses were eliminated or greatly reduced, whereas the effects of spatial attention on responses remained intact. Altogether, the results suggest that feedback from VPA modulates processing in visual cortex during attention to object features.
Saccadic eye movements (saccades) disrupt the continuous flow of visual information, yet our perception of the visual world remains uninterrupted. Here we assess the representation of the visual scene across saccades from single-trial spike trains of extrastriate visual areas, using a combined electrophysiology and statistical modeling approach. Using a model-based decoder, we generate a high-temporal-resolution readout of visual information and identify the specific changes in neurons' spatiotemporal sensitivity that underlie an integrated perisaccadic representation of visual space. Our results show that by maintaining a memory of the visual scene, extrastriate neurons produce an uninterrupted representation of the visual world. Extrastriate neurons exhibit a late response enhancement close to the time of saccade onset, which preserves the latest pre-saccadic information until the post-saccadic flow of retinal information resumes. These results show how our brain exploits available information to maintain a representation of the scene while visual inputs are disrupted.
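To make the decoding idea concrete, here is a minimal sketch, not the authors' actual model, of reading out a discrete stimulus from binned spike counts by maximum likelihood under a Poisson encoding model. All shapes, rates, and variable names below are hypothetical placeholders.

```python
import numpy as np

# Illustrative sketch (hypothetical data): maximum-likelihood decoding of a
# discrete stimulus from binned spike counts under a Poisson encoding model.
# rates[s, n] = expected count of neuron n in one bin when stimulus s is shown;
# in practice these tuning functions would be fit to training data.
rng = np.random.default_rng(0)
n_stimuli, n_neurons, n_bins = 4, 50, 200
rates = rng.uniform(0.1, 2.0, size=(n_stimuli, n_neurons))

true_stim = rng.integers(n_stimuli, size=n_bins)        # stimulus per time bin
counts = rng.poisson(rates[true_stim])                  # (n_bins, n_neurons) spikes

# Poisson log-likelihood of each candidate stimulus for every bin:
# log P(counts | s) = sum_n [count_n * log(rate_sn) - rate_sn] + const
log_lik = counts @ np.log(rates).T - rates.sum(axis=1)  # (n_bins, n_stimuli)
decoded = log_lik.argmax(axis=1)                        # ML readout per bin

print("single-bin decoding accuracy:", (decoded == true_stim).mean())
```

A bin-by-bin readout of this kind is what yields a high-temporal-resolution trace of the decoded scene across the saccade.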
- NSF-PAR ID: 10305708
- Publisher / Repository: Nature Publishing Group
- Date Published:
- Journal Name: Nature Communications
- Volume: 12
- Issue: 1
- ISSN: 2041-1723
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract: Higher-order visual thalamus communicates broadly and bi-directionally with primary and extrastriate cortical areas in various mammals. In primates, the pulvinar is a topographically and functionally organized thalamic nucleus that is largely dedicated to visual processing. Still, a more granular connectivity map is needed to understand the role of thalamocortical loops in visually guided behavior. Similarly, the secondary visual thalamic nucleus in mice (the lateral posterior nucleus, LP) has extensive connections with cortex. To resolve the precise connectivity of these circuits, we first mapped mouse visual cortical areas using intrinsic signal optical imaging and then injected fluorescently tagged retrograde tracers (cholera toxin subunit B) into retinotopically matched locations in various combinations of seven different visual areas. We find that LP neurons representing matched regions of visual space but projecting to different extrastriate areas occupy distinct, topographically organized zones, with few double-labeled cells (~4-6%). In addition, V1 and extrastriate visual areas received input from the ventrolateral part of the laterodorsal nucleus of the thalamus (LDVL). These observations indicate that the thalamus provides topographically organized circuits to each mouse visual area and raise new questions about the contributions of LP and LDVL to cortical activity.
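As a toy illustration of the double-labeling estimate, the overlap between two retrograde tracer populations can be computed from boolean label masks; the cell counts and labeling rates below are invented, not taken from the study.

```python
import numpy as np

# Hypothetical sketch: fraction of double-labeled cells among all labeled
# cells, given boolean masks marking which LP cells carry each tracer.
rng = np.random.default_rng(1)
n_cells = 1000
labeled_a = rng.random(n_cells) < 0.10   # cells projecting to area A (tracer A)
labeled_b = rng.random(n_cells) < 0.10   # cells projecting to area B (tracer B)

double = labeled_a & labeled_b
any_label = labeled_a | labeled_b
pct_double = 100 * double.sum() / any_label.sum()
print(f"double-labeled: {pct_double:.1f}% of labeled cells")
```

With independent labeling at these rates the overlap lands near the single-digit percentages the abstract reports, which is the signature of largely separate projection zones.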
Abstract: Spatial cognition depends on an accurate representation of orientation within an environment. Head direction cells in distributed brain regions receive a range of sensory inputs, but visual input is particularly important for aligning their responses to environmental landmarks. To investigate how population-level heading responses are aligned to visual input, we recorded from retrosplenial cortex (RSC) of head-fixed mice in a moving environment using two-photon calcium imaging. We show that RSC neurons are tuned to the animal's relative orientation in the environment, even in the absence of head movement. Next, we found that RSC receives functionally distinct projections from visual and thalamic areas and contains several functional classes of neurons. While some functional classes mirror RSC inputs, a newly discovered class coregisters visual and thalamic signals. Finally, decoding analyses reveal unique contributions to heading from each class. Our results suggest an RSC circuit for anchoring heading representations to environmental visual landmarks.
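One common way to decode heading from such a tuned population, offered only as a hedged sketch and not as the authors' analysis, is a population vector: each neuron's activity weights a unit vector at its preferred heading, and the circular mean of the weighted sum is the decoded heading. Every tuning curve and noise level below is simulated.

```python
import numpy as np

# Hypothetical sketch: population-vector decoding of heading from
# orientation-tuned neurons with invented von Mises-like tuning.
rng = np.random.default_rng(2)
n_neurons = 120
preferred = rng.uniform(0, 2 * np.pi, n_neurons)   # preferred headings (rad)

def population_response(theta, kappa=2.0):
    """Von Mises-like tuning plus additive noise for true heading theta."""
    tuning = np.exp(kappa * np.cos(theta - preferred))
    return tuning + rng.normal(0, 0.5, n_neurons)

true_heading = 1.3
r = population_response(true_heading)
decoded = np.arctan2((r * np.sin(preferred)).sum(),
                     (r * np.cos(preferred)).sum())
print(f"true {true_heading:.2f} rad, decoded {decoded % (2 * np.pi):.2f} rad")
```

Running the same decoder separately on each functional class is one way to compare their contributions to the heading estimate.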
Abstract: Scene memory has known spatial biases. Boundary extension is a well-known bias whereby observers remember visual information beyond an image's boundaries. While recent studies demonstrate that boundary contraction also reliably occurs based on intrinsic image properties, the specific properties that drive the effect are unknown. This study assesses the extent to which scene memory might have a fixed capacity for information. We assessed both visual and semantic information in a scene database using techniques from image processing and natural language processing, respectively. We then assessed how both types of information predicted memory errors for scene boundaries using a standard rapid serial visual presentation (RSVP) forced error paradigm. A linear regression model indicated that memories for scene boundaries were significantly predicted by semantic, but not visual, information, and that this effect persisted when scene depth was considered. Boundary extension was observed for images with low semantic information, and contraction was observed for images with high semantic information. This suggests a cognitive process that normalizes the amount of semantic information held in memory.
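A minimal sketch of this kind of regression, with simulated data standing in for the real semantic, visual, and depth measures (none of these values come from the study), looks like this:

```python
import numpy as np

# Hypothetical sketch: predict each image's boundary-memory error
# (extension vs. contraction) from semantic and visual information scores,
# with scene depth as a covariate. Data are simulated so that only the
# semantic predictor carries signal, mirroring the reported pattern.
rng = np.random.default_rng(3)
n_images = 300
semantic = rng.normal(size=n_images)   # e.g., information in text descriptions
visual = rng.normal(size=n_images)     # e.g., compressed file size / edge density
depth = rng.normal(size=n_images)      # estimated scene depth
error = 0.6 * semantic + rng.normal(0, 1, n_images)   # boundary error per image

X = np.column_stack([np.ones(n_images), semantic, visual, depth])
beta, *_ = np.linalg.lstsq(X, error, rcond=None)
for name, b in zip(["intercept", "semantic", "visual", "depth"], beta):
    print(f"{name:9s} beta = {b:+.3f}")
```

In this setup a reliably nonzero semantic coefficient alongside near-zero visual and depth coefficients corresponds to the semantic-only effect the abstract describes.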
Abstract: Our brains continuously acquire sensory information and make judgments even when visual information is limited. In some circumstances, an ambiguous object can be recognized from how it moves, such as an animal hopping or a plane flying overhead. Yet it remains unclear how movement is processed by brain areas involved in visual object recognition. Here we investigate whether inferior temporal (IT) cortex, an area known for its relevance in visual form processing, has access to motion information during recognition. We developed a matching task that required monkeys to recognize moving shapes with variable levels of shape degradation. Neural recordings in area IT showed that, surprisingly, some IT neurons responded more strongly to degraded shapes than to clear ones. Furthermore, neurons exhibited motion sensitivity at different times during the presentation of the blurry target. Population decoding analyses showed that motion patterns could be decoded from IT neuron pseudo-populations. Contrary to previous findings, these results suggest that neurons in IT can integrate visual motion and shape information, particularly when shape information is degraded, in a way that has previously been overlooked. Our results highlight the importance of using challenging multifeature recognition tasks to understand the role of area IT in naturalistic visual object recognition.
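To illustrate pseudo-population decoding in the abstract's sense, here is a hedged sketch using simulated responses and a cross-validated linear classifier (scikit-learn); the trial counts, labels, and effect sizes are invented, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical sketch: trial-wise responses of separately recorded neurons are
# stacked into one pseudo-population matrix, and a cross-validated linear
# classifier tests whether the motion pattern is readable from the population.
rng = np.random.default_rng(4)
n_trials, n_neurons, n_patterns = 200, 80, 4
labels = rng.integers(n_patterns, size=n_trials)   # motion pattern per trial

# Each neuron gets a small random preference among the motion patterns.
pattern_means = rng.normal(0, 0.5, size=(n_patterns, n_neurons))
responses = pattern_means[labels] + rng.normal(0, 1, size=(n_trials, n_neurons))

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, responses, labels, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} (chance = {1 / n_patterns:.2f})")
```

Above-chance cross-validated accuracy is the criterion by which such analyses conclude that motion information is present in the population response.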