Abstract Holistic processing of face and non-face stimuli has been framed as a perceptual strategy, with classic hallmarks of holistic processing, such as the composite effect, reflecting a failure of selective attention that is a consequence of this strategy. Further, evidence that holistic processing is affected by training different patterns of attentional prioritization suggests that it may result from learned attention to the whole, which renders it difficult to attend to only part of a stimulus. If so, holistic processing should be modulated by the same factors that shape attentional selection, such as the probability that distracting or task-relevant information will be present. In contrast, other accounts suggest that it is the match to an internal face template that triggers specialized holistic processing mechanisms. Here we probed these accounts by manipulating, across different testing sessions, the probability that the task-irrelevant face part in the composite face task contains task-congruent or -incongruent information. Attentional accounts of holistic processing predict that when the probability that the task-irrelevant part contains congruent information is low (25%), holistic processing should be attenuated compared to when this probability is high (75%). In contrast, template-based accounts of holistic face processing predict that it will be unaffected by this manipulation, given that the integrity of the faces remains intact. Experiment 1 found evidence consistent with attentional accounts of holistic face processing, and Experiment 2 extended these findings to holistic processing of non-face stimuli. These findings are broadly consistent with learned-attention accounts of holistic processing.
Learning attentional templates for value-based decision-making
Attention filters sensory inputs to enhance task-relevant information. It is guided by an “attentional template” that represents the stimulus features that are currently relevant. To understand how the brain learns and uses templates, we trained monkeys to perform a visual search task that required them to repeatedly learn new attentional templates. Neural recordings found that templates were represented across the prefrontal and parietal cortex in a structured manner, such that perceptually neighboring templates had similar neural representations. When the task changed, a new attentional template was learned by incrementally shifting the template toward rewarded features. Finally, we found that attentional templates transformed stimulus features into a common value representation that allowed the same decision-making mechanisms to deploy attention, regardless of the identity of the template. Altogether, our results provide insight into the neural mechanisms by which the brain learns to control attention and how attention can be flexibly deployed across tasks.
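The incremental shift of the template toward rewarded features can be illustrated with a simple delta-rule update. This is a hypothetical sketch for intuition only, not the model fitted in the study; the learning rate, the one-dimensional feature encoding, and all names are assumptions.

```python
import numpy as np

def update_template(template, chosen_features, reward, lr=0.1):
    """Shift the attentional template a fraction `lr` of the way toward
    the features of the chosen stimulus when reward is received
    (a delta-rule caricature, not the study's fitted model)."""
    return template + lr * reward * (chosen_features - template)

# Hypothetical 1-D feature space: the template starts far from the
# currently rewarded feature value and drifts toward it over trials.
template = np.array([0.0])
rewarded_feature = np.array([1.0])
for _ in range(50):
    template = update_template(template, rewarded_feature, reward=1.0)
print(template)  # close to the rewarded feature after 50 rewarded trials
```

With a learning rate of 0.1, each rewarded trial closes 10% of the remaining gap, so the template converges exponentially toward the rewarded feature, qualitatively matching the incremental learning described above.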
- Award ID(s): 2143391
- PAR ID: 10521736
- Publisher / Repository: Cell Press
- Date Published:
- Journal Name: Cell
- Volume: 187
- Issue: 6
- ISSN: 0092-8674
- Page Range / eLocation ID: 1476 to 1489.e21
- Subject(s) / Keyword(s): attention; cognitive control; reinforcement learning; reward learning; visual search; decision-making; prefrontal cortex; parietal cortex
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract Attention to a feature enhances the sensory representation of that feature. However, it is less clear whether attentional modulation is limited when multiple features must be attended. Here, we studied both the behavioral and neural correlates of this attentional limit by examining the effectiveness of attentional enhancement of one versus two color features. We recorded electroencephalography (EEG) while observers completed a color-coherence detection task in which they detected a weak coherence signal, an over-representation of a target color. Before stimulus onset, we presented either one or two valid color cues. We found that observers were faster and more accurate on one-cue trials than on two-cue trials, indicating that they could more effectively attend to a single color at a time. Similar behavioral deficits associated with attending to multiple colors were observed in a pre-EEG practice session with one-, two-, three-, and no-cue trials. Further, we were able to decode the target color from the EEG signals measured at posterior electrodes. Notably, decoding accuracy was greater on one-cue than on two-cue trials, indicating a stronger color signal on one-cue trials, likely due to stronger attentional enhancement. Lastly, we observed a positive correlation between the decoding effect and the behavioral effect comparing one-cue and two-cue trials, suggesting that the decoded neural signals are functionally associated with behavior. Overall, these results provide behavioral and neural evidence for a strong limit on the attentional enhancement of multiple features and suggest that there is a cost to maintaining multiple attentional templates in an active state.
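The link between signal strength and decoding accuracy can be illustrated with a toy simulation. This is not the study's decoding pipeline; it uses a simple nearest-centroid classifier on simulated "electrode" patterns, and the channel counts, separation values, and all names are assumptions. The point is only that a stronger class-dependent signal (as under stronger attentional enhancement) yields higher decoding accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_decode(train_X, train_y, test_X):
    """Classify each test trial by the nearest class-mean pattern
    (a toy stand-in for a multivariate EEG decoder)."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

def simulate(separation, n=200, n_elec=16):
    """Simulated posterior-'electrode' patterns for two target colors;
    `separation` controls the strength of the class-dependent signal."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, n_elec))
    X[:, 0] += separation * y  # color signal carried on one channel
    return X, y

X1, y1 = simulate(separation=2.5)  # strong signal (e.g., one-cue trials)
X2, y2 = simulate(separation=0.3)  # weak signal (e.g., two-cue trials)
acc1 = (nearest_centroid_decode(X1[:100], y1[:100], X1[100:]) == y1[100:]).mean()
acc2 = (nearest_centroid_decode(X2[:100], y2[:100], X2[100:]) == y2[100:]).mean()
print(f"strong-signal accuracy: {acc1:.2f}, weak-signal accuracy: {acc2:.2f}")
```

The stronger-signal condition decodes more accurately, mirroring the reported one-cue versus two-cue decoding difference.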
Abstract To accurately categorize items, humans learn to selectively attend to the stimulus dimensions that are most relevant to the task. Models of category learning describe how attention changes across trials as labeled stimuli are progressively observed. The Adaptive Attention Representation Model (AARM), for example, provides an account in which categorization decisions are based on the perceptual similarity of a new stimulus to stored exemplars, and dimension-wise attention is updated on every trial in the direction of a feedback-based error gradient. As such, attention modulation as described by AARM requires interactions among processes of orienting, visual perception, memory retrieval, prediction error, and goal maintenance to facilitate learning. The current study explored the neural bases of attention mechanisms using quantitative predictions from AARM to analyze behavioral and fMRI data collected while participants learned novel categories. Generalized linear model analyses revealed patterns of BOLD activation in the parietal cortex (orienting), visual cortex (perception), medial temporal lobe (memory retrieval), basal ganglia (prediction error), and pFC (goal maintenance) that covaried with the magnitude of model-predicted attentional tuning. Results are consistent with AARM's specification of attention modulation as a dynamic property of distributed cognitive systems.
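The feedback-driven attention update described above can be caricatured with a small exemplar model in which dimension-wise attention weights are stepped along the numerical gradient of a prediction-error term. This is a simplified illustration, not AARM's published specification; the similarity function, error term, learning rate, stimuli, and all names are assumptions.

```python
import numpy as np

def exemplar_probs(stimulus, exemplars, labels, attention, c=1.0):
    """GCM-style choice probabilities: attention-weighted similarity of a
    stimulus to stored exemplars (a simplified sketch, not AARM itself)."""
    d = np.abs(exemplars - stimulus)            # city-block distances
    sim = np.exp(-c * (d @ attention))          # attention-weighted similarity
    p1 = sim[labels == 1].sum() / sim.sum()
    return np.array([1.0 - p1, p1])

def attention_gradient_step(stimulus, exemplars, labels, attention,
                            true_label, lr=0.5, eps=1e-4):
    """Step the attention weights in the direction that reduces the
    squared prediction error, using a forward-difference gradient."""
    def err(a):
        p = exemplar_probs(stimulus, exemplars, labels, a)
        return (1.0 - p[true_label]) ** 2
    grad = np.array([
        (err(attention + eps * np.eye(len(attention))[i]) - err(attention)) / eps
        for i in range(len(attention))
    ])
    a = np.clip(attention - lr * grad, 0.0, None)
    return a / a.sum()                          # renormalize weights

# Two stimulus dimensions; only the first predicts category membership.
exemplars = np.array([[0.0, 0.3], [0.1, 0.8], [1.0, 0.2], [0.9, 0.9]])
labels = np.array([0, 0, 1, 1])
attention = np.array([0.5, 0.5])
for stim, lab in zip(exemplars, labels):
    attention = attention_gradient_step(stim, exemplars, labels, attention, lab)
print(attention)  # attention shifts toward the diagnostic first dimension
```

After a few feedback trials, attention concentrates on the category-diagnostic dimension, which is the qualitative behavior the model-based analyses above rely on.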
Abstract Selective attention improves sensory processing of relevant information but can also impact the quality of perception. For example, attention increases visual discrimination performance and at the same time boosts the apparent contrast of attended relative to unattended stimuli. Can attention also lead to perceptual distortions of visual representations? Optimal tuning accounts of attention suggest that processing is biased towards "off-tuned" features to maximize the signal-to-noise ratio in favor of the target, especially when targets and distractors are confusable. Here, we tested whether such tuning gives rise to phenomenological changes of visual features. We instructed participants to select a color among other colors in a visual search display and subsequently asked them to judge the appearance of the target color in a two-alternative forced-choice task. Participants consistently judged the target color to appear more dissimilar from the distractor color in feature space. Critically, the magnitude of these perceptual biases varied systematically with the similarity between target and distractor colors during search, indicating that attentional tuning quickly adapts to current task demands. In control experiments we ruled out possible non-attentional explanations such as color contrast or memory effects. Overall, our results demonstrate that selective attention warps the representational geometry of color space, resulting in profound perceptual changes across large swaths of feature space. Broadly, these results indicate that efficient attentional selection can come at a perceptual cost by distorting our sensory experience.
The pulvinar, also called the lateral posterior nucleus of the thalamus in rodents, is one of the higher-order thalamic relays and the main visual extrageniculate thalamic nucleus in rodents and primates. Although primate studies report the pulvinar is engaged under attentional demands, there are open questions about the detailed role of the pulvinar in visuospatial attention. The pulvinar provides the primary thalamic input to the posterior parietal cortex (PPC). Both the pulvinar and the PPC are known to be important for visuospatial attention. Our previous work showed that neuronal activity in the PPC correlated with multiple phases of a visuospatial attention (VSA) task, including onset of the visual stimuli, decision-making, task-relevant locations, and behavioral outcomes. Here, we hypothesized that the pulvinar, as the major thalamic input to the PPC, is involved in visuospatial attention as well as in other cognitive functions related to the processing of visual information. We recorded the neuronal activity of the pulvinar in rats during their performance on the VSA task. The task was designed to engage goal-directed, top–down attention as well as stimulus-driven, bottom–up attention. Rats monitored three possible locations for the brief appearance of a target stimulus. An approach to the correct target location was followed by a liquid reward. For analysis, each trial was divided into behavioral epochs demarcated by stimulus onset, selection behavior, and approach to reward. We found that neurons in the pulvinar signaled stimulus onset and selection behavior, consistent with the interpretation that the pulvinar is engaged in both bottom–up and top–down visuospatial attention. Our results also suggested that pulvinar cells responded to allocentric and egocentric task-relevant locations.