Specific features of visual objects innately draw approach responses in animals and provide natural signals of potential reward. However, visual sampling behaviours and the detection of salient, rewarding stimuli are context- and behavioural-state-dependent, and it remains unclear how visual perception and orienting responses change with specific expectations. To address this question, we employed a virtual stimulus orienting paradigm based on prey capture to quantify the conditional expression of visual stimulus-evoked innate approaches in freely moving mice. We found that specific combinations of stimulus features selectively evoked innate approach or freezing responses when stimuli were unexpected. Prey capture experience, and therefore the expectation of prey in the environment, selectively modified approach frequency and altered which visual features evoked approach. Thus, mice exhibit selective orienting responses to parameterized visual stimuli that can be robustly and specifically modified through natural experience. This work provides critical insight into how natural appetitive behaviours are driven both by specific features of visual motion and by internal states that alter stimulus salience.
Distributed Neural Systems Support Flexible Attention Updating during Category Learning
Abstract: To accurately categorize items, humans learn to selectively attend to the stimulus dimensions that are most relevant to the task. Models of category learning describe how attention changes across trials as labeled stimuli are progressively observed. The Adaptive Attention Representation Model (AARM), for example, provides an account in which categorization decisions are based on the perceptual similarity of a new stimulus to stored exemplars, and dimension-wise attention is updated on every trial in the direction of a feedback-based error gradient. As such, attention modulation as described by AARM requires interactions among processes of orienting, visual perception, memory retrieval, prediction error, and goal maintenance to facilitate learning. The current study explored the neural bases of attention mechanisms using quantitative predictions from AARM to analyze behavioral and fMRI data collected while participants learned novel categories. Generalized linear model analyses revealed patterns of BOLD activation in the parietal cortex (orienting), visual cortex (perception), medial temporal lobe (memory retrieval), basal ganglia (prediction error), and pFC (goal maintenance) that covaried with the magnitude of model-predicted attentional tuning. Results are consistent with AARM's specification of attention modulation as a dynamic property of distributed cognitive systems.
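The trial-wise mechanism described in the abstract (exemplar similarity plus a feedback-based attention gradient) can be sketched as a minimal GCM-style exemplar model. This is an illustrative sketch, not the authors' implementation of AARM: the function names, the similarity kernel, and the learning rate are assumptions, and the gradient is taken numerically for clarity.

```python
import numpy as np

def similarity(x, exemplar, w, c=2.0):
    # Attention-weighted exponential similarity (GCM-style kernel);
    # c is an assumed sensitivity parameter
    return np.exp(-c * np.sum(w * np.abs(x - exemplar)))

def category_probs(x, exemplars, labels, w, n_cats=2):
    # Evidence for a category = summed similarity to its stored exemplars
    evidence = np.zeros(n_cats)
    for e, lab in zip(exemplars, labels):
        evidence[lab] += similarity(x, e, w)
    return evidence / evidence.sum()

def update_attention(x, exemplars, labels, w, target, lr=0.5, eps=1e-5):
    # Trial-wise update: move each dimension's attention weight along the
    # numerical gradient of the log-probability of the feedback (correct) label
    grad = np.zeros_like(w)
    for d in range(len(w)):
        w_hi, w_lo = w.copy(), w.copy()
        w_hi[d] += eps
        w_lo[d] -= eps
        p_hi = category_probs(x, exemplars, labels, w_hi)[target]
        p_lo = category_probs(x, exemplars, labels, w_lo)[target]
        grad[d] = (np.log(p_hi) - np.log(p_lo)) / (2 * eps)
    return np.clip(w + lr * grad, 0.0, None)  # attention stays non-negative

# Toy task: dimension 0 predicts the category; dimension 1 is irrelevant
exemplars = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
labels = [0, 0, 1, 1]
w = np.array([1.0, 1.0])
for x, target in zip(exemplars, labels):  # one pass of feedback trials
    w = update_attention(x, exemplars, labels, w, target)
```

On this toy task the gradient is positive for the relevant dimension and zero for the irrelevant one, so attention concentrates on dimension 0 across trials, mirroring the selective-attention tuning the abstract describes.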
- Award ID(s): 1847603
- PAR ID: 10421956
- Date Published:
- Journal Name: Journal of Cognitive Neuroscience
- Volume: 34
- Issue: 10
- ISSN: 0898-929X
- Page Range / eLocation ID: 1761 to 1779
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Prefrontal cortex modulates sensory signals in extrastriate visual cortex, in part via its direct projections from the frontal eye field (FEF), an area involved in selective attention. We find that working memory-related activity is a dominant signal within FEF input to visual cortex. Although this signal alone does not evoke spiking responses in areas V4 and MT during memory, the gain of visual responses in these areas increases, and neuronal receptive fields expand and shift towards the remembered location, improving the stimulus representation by neuronal populations. These results provide a basis for enhancing the representation of working memory targets and implicate persistent FEF activity as a basis for the interdependence of working memory and selective attention.
- Category learning and visual perception are fundamentally interactive processes, such that successful categorization often depends on the ability to make fine visual discriminations between stimuli that vary on continuously valued dimensions. Research suggests that category learning can improve perceptual discrimination along the stimulus dimensions that predict category membership and that these perceptual enhancements are a byproduct of functional plasticity in the visual system. However, the precise mechanisms underlying learning-dependent sensory modulation in categorization are not well understood. We hypothesized that category learning leads to a representational sharpening of underlying sensory populations tuned to values at or near the category boundary. Furthermore, such sharpening should occur largely during active learning of new categories. These hypotheses were tested using fMRI and a theoretically constrained model of vision to quantify changes in the shape of orientation representations while human adult subjects learned to categorize physically identical stimuli based on either an orientation rule (N = 12) or an orthogonal spatial frequency rule (N = 13). Consistent with our predictions, modeling results revealed relatively enhanced reconstructed representations of stimulus orientation in visual cortex (V1–V3) only for orientation rule learners. Moreover, these reconstructed representations varied as a function of distance from the category boundary, such that representations for challenging stimuli near the boundary were significantly sharper than those for stimuli at the category centers. These results support an efficient model of plasticity wherein only the sensory populations tuned to the most behaviorally relevant regions of feature space are enhanced during category learning.
- Dynamic predictive coding: A model of hierarchical sequence learning and prediction in the neocortex (Rubin, Jonathan, Ed.). We introduce dynamic predictive coding, a hierarchical model of spatiotemporal prediction and sequence learning in the neocortex. The model assumes that higher cortical levels modulate the temporal dynamics of lower levels, correcting their predictions of dynamics using prediction errors. As a result, lower levels form representations that encode sequences at shorter timescales (e.g., a single step) while higher levels form representations that encode sequences at longer timescales (e.g., an entire sequence). We tested this model using a two-level neural network, where the top-down modulation creates low-dimensional combinations of a set of learned temporal dynamics to explain input sequences. When trained on natural videos, the lower-level model neurons developed space-time receptive fields similar to those of simple cells in the primary visual cortex while the higher-level responses spanned longer timescales, mimicking temporal response hierarchies in the cortex. Additionally, the network's hierarchical sequence representation exhibited both predictive and postdictive effects resembling those observed in visual motion processing in humans (e.g., in the flash-lag illusion). When coupled with an associative memory emulating the role of the hippocampus, the model allowed episodic memories to be stored and retrieved, supporting cue-triggered recall of an input sequence similar to activity recall in the visual cortex. When extended to three hierarchical levels, the model learned progressively more abstract temporal representations along the hierarchy. Taken together, our results suggest that cortical processing and learning of sequences can be interpreted as dynamic predictive coding based on a hierarchical spatiotemporal generative model of the visual world.
- Episodic memories are records of personally experienced events, coded neurally via the hippocampus and surrounding medial temporal lobe cortex. Information about the neural signal corresponding to a memory representation can be measured in fMRI data when the pattern across voxels is examined. Prior studies have found that similarity in the voxel patterns across repetition of a to-be-remembered stimulus predicts later memory retrieval, but the results are inconsistent across studies. The current study investigates the possibility that cognitive goals (defined here via the task instructions given to participants) during encoding affect the voxel pattern that will later support memory retrieval, and therefore that neural representations cannot be interpreted based on the stimulus alone. The behavioral results showed that exposure to variable cognitive tasks across repetition of events benefited subsequent memory retrieval. Voxel patterns in the hippocampus indicated a significant interaction between cognitive tasks (variable vs. consistent) and memory (remembered vs. forgotten) such that reduced voxel pattern similarity for repeated events with variable cognitive tasks, but not consistent cognitive tasks, supported later memory success. There was no significant interaction in neural pattern similarity between cognitive tasks and memory success in medial temporal cortices or lateral occipital cortex. Instead, higher similarity in voxel patterns in right medial temporal cortices was associated with later memory retrieval, regardless of cognitive task. In conclusion, we found that the relationship between pattern similarity across repeated encoding and memory success in the hippocampus (but not medial temporal lobe cortex) changes when the cognitive task during encoding does or does not vary across repetitions of the event.
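The two-level architecture summarized in the dynamic predictive coding entry above (a higher level that modulates lower-level temporal dynamics, corrected by prediction errors) can be sketched in a few lines. This is a hedged toy sketch, not the published model: the dimensions, learning rate, and the stand-in rotation dynamics are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Lower level: the next input is predicted by a transition matrix that is a
# low-dimensional, top-down-weighted combination of K learned dynamics V_k.
n, K, m = 2, 3, 2                           # input dim, dynamics bank, higher-state dim
V = rng.standard_normal((K, n, n)) * 0.3    # bank of learned temporal dynamics
W_top = rng.standard_normal((K, m)) * 0.3   # maps higher-level state to mixture weights
h = np.ones(m)                              # higher-level (slower-timescale) state

def predict_next(x, h):
    w = W_top @ h                           # top-down modulation of lower-level dynamics
    T = np.tensordot(w, V, axes=1)          # effective transition matrix
    return T @ x

# Stand-in input sequence: a fixed rotation plays the role of natural dynamics
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

lr = 0.05
losses = []
x = np.array([1.0, 0.0])
for t in range(500):
    x_next = A @ x
    err = x_next - predict_next(x, h)       # lower-level prediction error
    losses.append(float(np.linalg.norm(err)))
    w = W_top @ h
    for k in range(K):                      # error corrects the dynamics bank ...
        V[k] += lr * w[k] * np.outer(err, x)
    g = np.array([(V[k] @ x) @ err for k in range(K)])
    h += lr * (W_top.T @ g)                 # ... and the higher-level state
    x = x_next
```

Both updates are plain gradient descent on the squared prediction error, so the lower-level prediction improves over the sequence; the key design point mirrored from the entry above is that the higher level never predicts inputs directly, only the mixing of lower-level dynamics.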
 An official website of the United States government