


Title: Category Learning Selectively Enhances Representations of Boundary-Adjacent Exemplars in Early Visual Cortex

Category learning and visual perception are fundamentally interactive processes, such that successful categorization often depends on the ability to make fine visual discriminations between stimuli that vary on continuously valued dimensions. Research suggests that category learning can improve perceptual discrimination along the stimulus dimensions that predict category membership and that these perceptual enhancements are a byproduct of functional plasticity in the visual system. However, the precise mechanisms underlying learning-dependent sensory modulation in categorization are not well understood. We hypothesized that category learning leads to a representational sharpening of underlying sensory populations tuned to values at or near the category boundary. Furthermore, such sharpening should occur largely during active learning of new categories. These hypotheses were tested using fMRI and a theoretically constrained model of vision to quantify changes in the shape of orientation representations while human adult subjects learned to categorize physically identical stimuli based on either an orientation rule (N = 12) or an orthogonal spatial frequency rule (N = 13). Consistent with our predictions, modeling results revealed relatively enhanced reconstructed representations of stimulus orientation in visual cortex (V1–V3) only for orientation rule learners. Moreover, these reconstructed representations varied as a function of distance from the category boundary, such that representations for challenging stimuli near the boundary were significantly sharper than those for stimuli at the category centers. These results support an efficient model of plasticity wherein only the sensory populations tuned to the most behaviorally relevant regions of feature space are enhanced during category learning.
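
In this literature, reconstructed representations of stimulus orientation are typically obtained with an inverted encoding model: voxel responses are modeled as weighted sums of orientation-tuned channels, the channel-to-voxel weights are estimated from training data, and the weights are inverted to recover channel-level response profiles on held-out trials. Below is a minimal sketch of that generic two-stage recipe (in the spirit of Brouwer and Heeger's forward-model approach); the basis parameters, function names, and data shapes are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def make_basis(n_channels=8, exponent=7):
    """Raised-cosine channels tiling the circular 180-degree orientation space."""
    thetas = np.arange(180)[:, None]                     # stimulus orientations (deg)
    centers = np.arange(0, 180, 180 / n_channels)[None]  # channel preferred orientations
    d = np.abs(thetas - centers) % 180.0
    d = np.minimum(d, 180.0 - d)                         # circular orientation distance
    return np.cos(np.pi * d / 180.0) ** exponent         # (180, n_channels), zero at 90 deg

def reconstruct(train_bold, train_oris, test_bold, basis):
    """Two-stage inverted encoding model.

    train_bold : (n_train, n_voxels) BOLD patterns with known stimulus orientations
    train_oris : (n_train,) integer orientations in [0, 180)
    test_bold  : (n_test, n_voxels) held-out patterns to reconstruct
    """
    C_train = basis[train_oris]                          # (n_train, n_channels)
    # Stage 1: least-squares estimate of channel-to-voxel weights
    W, *_ = np.linalg.lstsq(C_train, train_bold, rcond=None)
    # Stage 2: invert the weights to recover channel responses for new data
    return test_bold @ np.linalg.pinv(W)                 # (n_test, n_channels)
```

Comparing the width of the reconstructed channel-response profiles across learning, separately for boundary-adjacent and category-center orientations, is the kind of analysis that would reveal the selective sharpening described above.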

 
NSF-PAR ID: 10474167
Journal: The Journal of Neuroscience, Volume 44, Issue 3, Article No. e1039232023
ISSN: 0270-6474
DOI prefix: 10.1523
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    The grouping of sensory stimuli into categories is fundamental to cognition. Previous research in the visual and auditory systems supports a two‐stage processing hierarchy that underlies perceptual categorization: (a) a “bottom‐up” perceptual stage in sensory cortices, where neurons show selectivity for stimulus features, and (b) a “top‐down” second stage in higher-level cortical areas that categorizes the stimulus‐selective input from the first stage. To test the hypothesis that this two‐stage model applies to the somatosensory system, 14 human participants were trained to categorize vibrotactile stimuli presented to their right forearm. Then, during an fMRI scan, participants actively categorized the stimuli. Representational similarity analysis revealed stimulus selectivity in areas including the left precentral and postcentral gyri, the supramarginal gyrus, and the posterior middle temporal gyrus. Crucially, we identified a single category‐selective region in the left ventral precentral gyrus. Furthermore, an estimation of directed functional connectivity provided evidence for robust top‐down connectivity from the second stage to the first. These results support the validity of the two‐stage model of perceptual categorization for the somatosensory system, suggesting common computational principles and a unified theory of perceptual categorization across the visual, auditory, and somatosensory systems.
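
    A bare-bones version of the representational similarity analysis named here compares a region's representational dissimilarity matrix (RDM) against candidate model RDMs: a region counts as stimulus selective if its RDM tracks a stimulus-feature model, and as category selective if it tracks a category-label model. The sketch below is a generic RSA recipe with made-up vibrotactile frequencies and labels, not the study's actual stimuli or code.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(patterns):
        """Condensed RDM: 1 - Pearson r between every pair of condition patterns.

        patterns : (n_conditions, n_voxels) mean activation pattern per stimulus.
        """
        return pdist(patterns, metric="correlation")

    def rsa_fit(roi_patterns, model_rdm):
        """Rank correlation between an ROI's RDM and a model RDM."""
        rho, _ = spearmanr(rdm(roi_patterns), model_rdm)
        return rho

    # Hypothetical design: four vibration frequencies split into two categories.
    freqs = np.array([10.0, 14.0, 20.0, 28.0])              # Hz (illustrative values)
    labels = np.array([0, 0, 1, 1])
    stim_model = pdist(freqs[:, None], metric="euclidean")  # graded feature distances
    cat_model = pdist(labels[:, None], metric="hamming")    # 0 within, 1 across categories
    ```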

     
  2. Abstract

    Selective attention improves sensory processing of relevant information but can also impact the quality of perception. For example, attention increases visual discrimination performance and at the same time boosts the apparent stimulus contrast of attended relative to unattended stimuli. Can attention also lead to perceptual distortions of visual representations? Optimal tuning accounts of attention suggest that processing is biased towards “off-tuned” features to maximize the signal-to-noise ratio in favor of the target, especially when targets and distractors are confusable. Here, we tested whether such tuning gives rise to phenomenological changes of visual features. We instructed participants to select a color among other colors in a visual search display and subsequently asked them to judge the appearance of the target color in a 2-alternative forced-choice task. Participants consistently judged the target color to appear more dissimilar from the distractor color in feature space. Critically, the magnitude of these perceptual biases varied systematically with the similarity between target and distractor colors during search, indicating that attentional tuning quickly adapts to current task demands. In control experiments, we ruled out possible non-attentional explanations such as color contrast or memory effects. Overall, our results demonstrate that selective attention warps the representational geometry of color space, resulting in profound perceptual changes across large swaths of feature space. Broadly, these results indicate that efficient attentional selection can come at a perceptual cost by distorting our sensory experience.
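
    The optimal tuning account makes a quantitative prediction that is easy to reproduce in a toy model: with Gaussian channels over color space, the channel that best separates a target from a distractor is shifted away from the distractor, and the shift grows as the two colors become more similar. The snippet below illustrates this under an assumed tuning width; it is a didactic toy, not the study's model.

    ```python
    import numpy as np

    def tuning(center, stim, sigma=15.0):
        """Gaussian channel response to a color `stim` (degrees in color space)."""
        return np.exp(-(stim - center) ** 2 / (2 * sigma ** 2))

    def best_channel(target, distractor, centers):
        """Channel center that maximizes the target-minus-distractor response."""
        signal = tuning(centers, target) - tuning(centers, distractor)
        return centers[np.argmax(signal)]

    centers = np.linspace(-90.0, 90.0, 721)          # candidate channel centers
    for delta in (10, 30, 60):                       # distractor distance from target
        c = best_channel(0.0, float(delta), centers)
        print(f"distractor at +{delta} deg -> most informative channel at {c:+.2f} deg")
    ```

    The same ordering, with larger off-tuning for more confusable distractors, parallels the similarity-dependent appearance biases reported above.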

     
  3. Abstract

    To accurately categorize items, humans learn to selectively attend to the stimulus dimensions that are most relevant to the task. Models of category learning describe how attention changes across trials as labeled stimuli are progressively observed. The Adaptive Attention Representation Model (AARM), for example, provides an account in which categorization decisions are based on the perceptual similarity of a new stimulus to stored exemplars, and dimension-wise attention is updated on every trial in the direction of a feedback-based error gradient. As such, attention modulation as described by AARM requires interactions among processes of orienting, visual perception, memory retrieval, prediction error, and goal maintenance to facilitate learning. The current study explored the neural bases of attention mechanisms using quantitative predictions from AARM to analyze behavioral and fMRI data collected while participants learned novel categories. Generalized linear model analyses revealed patterns of BOLD activation in the parietal cortex (orienting), visual cortex (perception), medial temporal lobe (memory retrieval), basal ganglia (prediction error), and pFC (goal maintenance) that covaried with the magnitude of model-predicted attentional tuning. Results are consistent with AARM's specification of attention modulation as a dynamic property of distributed cognitive systems.
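
    To make the trial-by-trial mechanism concrete, here is a heavily simplified exemplar model in the spirit of AARM: category evidence comes from attention-weighted similarity to stored exemplars, and after feedback the attention weights move along the gradient that raises the probability of the correct category. A numerical gradient stands in for AARM's analytic update, and every parameter value is an illustrative assumption.

    ```python
    import numpy as np

    def exemplar_probs(x, exemplars, labels, w, c=3.0, n_cat=2):
        """GCM-style choice probabilities for stimulus x.

        x         : (n_dims,) current stimulus
        exemplars : (n_stored, n_dims) previously seen stimuli
        labels    : (n_stored,) category label of each exemplar
        w         : (n_dims,) attention weights (nonnegative, summing to 1)
        """
        dist = np.abs(exemplars - x) @ w        # attention-weighted city-block distance
        sim = np.exp(-c * dist)                 # exponential similarity gradient
        evidence = np.array([sim[labels == k].sum() for k in range(n_cat)])
        return evidence / evidence.sum()

    def attention_step(x, exemplars, labels, w, true_cat, lr=0.05, eps=1e-4):
        """One feedback-driven update: nudge attention along the (numerical)
        gradient that increases the probability of the correct category."""
        grad = np.zeros_like(w)
        for d in range(len(w)):
            w_hi, w_lo = w.copy(), w.copy()
            w_hi[d] += eps
            w_lo[d] -= eps
            grad[d] = (exemplar_probs(x, exemplars, labels, w_hi)[true_cat]
                       - exemplar_probs(x, exemplars, labels, w_lo)[true_cat]) / (2 * eps)
        w = np.clip(w + lr * grad, 1e-6, None)
        return w / w.sum()                      # project back onto the simplex
    ```
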
  4. Perceptual experiences may arise from neuronal activity patterns in mammalian neocortex. We probed mouse neocortex during visual discrimination using a red-shifted channelrhodopsin (ChRmine, discovered through structure-guided genome mining) alongside multiplexed multiphoton-holography (MultiSLM), achieving control of individually specified neurons spanning large cortical volumes with millisecond precision. Stimulating a critical number of stimulus-orientation-selective neurons drove widespread recruitment of functionally related neurons, a process enhanced by (but not requiring) orientation-discrimination task learning. Optogenetic targeting of orientation-selective ensembles elicited correct behavioral discrimination. Cortical layer–specific dynamics were apparent, as emergent neuronal activity asymmetrically propagated from layer 2/3 to layer 5, and smaller layer 5 ensembles were as effective as larger layer 2/3 ensembles in eliciting orientation discrimination behavior. Population dynamics emerging after optogenetic stimulation both correctly predicted behavior and resembled natural internal representations of visual stimuli at cellular resolution over volumes of cortex. 
  5. Abstract

    We apply the “wisdom of the crowd” idea to human category learning, using a simple approach that combines people's categorization decisions by taking the majority decision. We first show that the aggregated crowd category learning behavior found by this method performs well, learning categories more quickly than most or all individuals for 28 previously collected datasets. We then extend the approach so that it does not require people to categorize every stimulus. We do this using a model‐based method that predicts the categorization behavior people would produce for new stimuli, based on their behavior with observed stimuli, and uses the majority of these predicted decisions. We demonstrate and evaluate the model‐based approach in two case studies. In the first, we use the general recognition theory decision‐bound model of categorization (Ashby & Townsend, 1986) to infer each person's decision boundary for two categories of perceptual stimuli, and we use these inferences to make aggregated predictions about new stimuli. In the second, we use the generalized context model, an exemplar model of categorization (Nosofsky, 1986), to infer each person's selective attention for face stimuli, and we use these inferences to make aggregated predictions about withheld stimuli. In both case studies, we show that our method successfully predicts the category of unobserved stimuli, and we emphasize that the aggregated crowd decisions arise from psychologically interpretable processes and parameters. We conclude by discussing extensions and potential real‐world applications of the approach.
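
    Computationally, the two aggregation schemes are small: a plain majority vote over everyone's category decisions, and a model-based variant that fits each person a decision model on the stimuli they judged, predicts their decisions for the remaining stimuli, and takes the majority of those predictions. The sketch below uses a logistic-regression boundary as a plainly labeled stand-in for the GRT and GCM models named above; the data shapes and names are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def crowd_majority(decisions):
        """Majority vote over binary category decisions.

        decisions : (n_people, n_stimuli) array of 0/1 choices.
        """
        return (decisions.mean(axis=0) > 0.5).astype(int)   # ties fall to category 0

    def model_based_crowd(per_person_X, per_person_y, new_X):
        """Fit each person a decision boundary on the stimuli they judged,
        then take the majority of their predicted decisions for new stimuli."""
        preds = []
        for X_i, y_i in zip(per_person_X, per_person_y):    # one dataset per person
            clf = LogisticRegression().fit(X_i, y_i)        # stand-in for GRT/GCM
            preds.append(clf.predict(new_X))
        return crowd_majority(np.array(preds))
    ```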

     