Selective attention improves sensory processing of relevant information but can also impact the quality of perception. For example, attention increases visual discrimination performance and at the same time boosts apparent stimulus contrast of attended relative to unattended stimuli. Can attention also lead to perceptual distortions of visual representations? Optimal tuning accounts of attention suggest that processing is biased towards “off-tuned” features to maximize the signal-to-noise ratio in favor of the target, especially when targets and distractors are confusable. Here, we tested whether such tuning gives rise to phenomenological changes of visual features. We instructed participants to select a color among other colors in a visual search display and subsequently asked them to judge the appearance of the target color in a 2-alternative forced choice task. Participants consistently judged the target color to appear more dissimilar from the distractor color in feature space. Critically, the magnitude of these perceptual biases varied systematically with the similarity between target and distractor colors during search, indicating that attentional tuning quickly adapts to current task demands. In control experiments, we ruled out possible non-attentional explanations such as color contrast or memory effects. Overall, our results demonstrate that selective attention warps the representational geometry of color space, resulting in profound perceptual changes across large swaths of feature space. Broadly, these results indicate that efficient attentional selection can come at a perceptual cost by distorting our sensory experience.
- NSF-PAR ID: 10408068
- Publisher / Repository: Nature Publishing Group
- Journal Name: Scientific Reports
- Volume: 13
- Issue: 1
- ISSN: 2045-2322
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
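As a rough illustration of the optimal-tuning account summarized in the abstract above, the sketch below simulates a bank of hue-tuned channels and applies an attentional gain profile centered slightly "off" the target hue, on the side away from a nearby distractor. This is not the authors' model; the tuning width, gain amplitude, gain offset, and hue values are all assumptions chosen only to make the repulsion effect visible in a population read-out.

```python
# Hypothetical sketch of the "off-tuned gain" idea: an attentional gain profile
# centered slightly away from the target hue (on the side opposite a nearby
# distractor) shifts the population read-out of the target away from the
# distractor. All tuning widths, gains, and hue values are illustrative
# assumptions, not parameters from the paper.
import numpy as np

def circ_dist(a, b):
    """Signed circular distance a - b in degrees, wrapped to (-180, 180]."""
    return (a - b + 180.0) % 360.0 - 180.0

n_channels = 180
prefs = np.linspace(0.0, 360.0, n_channels, endpoint=False)  # preferred hues (deg)
tuning_sd = 30.0        # channel tuning width (deg), assumed
target_hue = 180.0      # hue of the search target (deg)
distractor_hue = 150.0  # nearby distractor hue (deg)
gain_shift = 20.0       # offset of the gain peak away from the distractor, assumed
gain_amp = 0.5          # attentional gain amplitude, assumed

def channel_response(stim_hue, gain_center=None):
    """Population response to a hue, optionally modulated by a Gaussian gain profile."""
    drive = np.exp(-0.5 * (circ_dist(prefs, stim_hue) / tuning_sd) ** 2)
    if gain_center is not None:
        gain = 1.0 + gain_amp * np.exp(
            -0.5 * (circ_dist(prefs, gain_center) / tuning_sd) ** 2)
        drive = drive * gain
    return drive

def decode_hue(resp):
    """Population-vector read-out of the represented hue (deg)."""
    ang = np.deg2rad(prefs)
    return np.rad2deg(np.arctan2((resp * np.sin(ang)).sum(),
                                 (resp * np.cos(ang)).sum())) % 360.0

# Place the gain peak on the far side of the target relative to the distractor.
away = np.sign(circ_dist(target_hue, distractor_hue))
gain_center = target_hue + away * gain_shift

neutral = decode_hue(channel_response(target_hue))
attended = decode_hue(channel_response(target_hue, gain_center))
print(f"decoded target hue, no attentional gain: {neutral:6.1f} deg")
print(f"decoded target hue, off-tuned gain:      {attended:6.1f} deg "
      f"(repelled from the distractor at {distractor_hue:.0f} deg)")
```

With these toy parameters the decoded target hue shifts a few degrees away from the distractor, which is the qualitative direction of the perceptual bias reported in the abstract.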
More Like this
-
Abstract: Children (N = 103, 4–9 years, 59 females, 84% White, c. 2019) completed visual processing, visual feature integration (color, luminance, motion), and visual search tasks. Contrast sensitivity and feature search improved with age similarly for luminance- and color-defined targets. Incidental feature integration improved more with age for color-motion than luminance-motion. Individual differences in feature search ( = .11) and incidental feature integration ( = .06) mediated age-related changes in conjunction visual search, an index of visual selective attention. These findings suggest that visual selective attention is best conceptualized as a series of developmental trajectories, within an individual, that vary by an object's defining features. These data have implications for the design of educational and interventional strategies intended to maximize attention for learning and memory.
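The mediation result in the abstract above can be made concrete with a minimal sketch of the standard indirect-effect logic. This is not the authors' analysis; the variable names, toy data, and ordinary-least-squares approach are assumptions used purely for illustration.

```python
# Minimal sketch of the indirect-effect (mediation) logic; variable names and
# toy data are assumptions, not the authors' measures or analysis.
#   a:  age -> mediator (feature-search ability)
#   b:  mediator -> conjunction-search performance, controlling for age
#   indirect (mediated) effect = a * b
import numpy as np

rng = np.random.default_rng(0)
n = 103
age = rng.uniform(4, 9, n)                         # years (toy data)
feature_search = 0.6 * age + rng.normal(0, 1, n)   # mediator (toy data)
conj_search = 0.5 * age + 0.4 * feature_search + rng.normal(0, 1, n)

def ols(y, predictors):
    """Ordinary-least-squares coefficients for y ~ predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(feature_search, [age])[1]                  # age -> mediator
b = ols(conj_search, [age, feature_search])[2]     # mediator -> outcome, given age
c = ols(conj_search, [age])[1]                     # total age effect
print(f"indirect (mediated) effect a*b = {a * b:.3f}")
print(f"proportion of the total age effect mediated = {a * b / c:.2f}")
```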
-
Abstract: Previously rewarded stimuli slow response times (RTs) during visual search, despite being physically non-salient and no longer task-relevant or rewarding. Such value-driven attentional capture (VDAC) has been measured in a training-test paradigm. In the training phase, the search target is rendered in one of two colors (one predicting high reward and the other low reward). In this study, we modified this traditional training phase to include pre-cues that signaled reliable or unreliable information about the trial-to-trial color of the training-phase search target. Reliable pre-cues indicated the upcoming target color with certainty, whereas unreliable pre-cues indicated that the target was equally likely to be one of two distinct colors. Thus, reliable and unreliable pre-cues provided certain and uncertain information, respectively, about the magnitude of the upcoming reward. We then tested for VDAC in a traditional test phase. We found that unreliably pre-cued distractors slowed RTs and drew more initial eye movements during search for the test-phase target, relative to reliably pre-cued distractors, thus providing novel evidence for an influence of information reliability on attentional capture. That said, our experimental manipulation also eliminated value-dependency (i.e., slowed RTs when a high-reward-predicting distractor was present relative to a low-reward-predicting distractor) for both kinds of distractors. Taken together, these results suggest that target-color uncertainty, rather than reward magnitude, played a critical role in modulating the allocation of value-driven attention in this study.
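For readers unfamiliar with how these two effects are usually quantified, here is a minimal sketch of the response-time contrasts described above, using made-up trial data and assumed column names rather than anything from the study.

```python
# Illustrative only: the trial data and column names below are made up to show
# how the two effects discussed above are typically computed.
#   attentional capture = RT(distractor present) - RT(distractor absent)
#   value dependency    = RT(high-reward distractor) - RT(low-reward distractor)
import pandas as pd

trials = pd.DataFrame({
    "rt_ms":      [612, 598, 655, 640, 701, 688, 590, 605],
    "distractor": ["absent", "absent", "low", "low", "high", "high", "absent", "low"],
})

mean_rt = trials.groupby("distractor")["rt_ms"].mean()
capture = mean_rt[["low", "high"]].mean() - mean_rt["absent"]
value_dependency = mean_rt["high"] - mean_rt["low"]
print(f"attentional capture: {capture:.1f} ms slowing")
print(f"value dependency:    {value_dependency:.1f} ms (high- vs. low-reward distractor)")
```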
-
Feature-based attention is known to enhance visual processing globally across the visual field, even at task-irrelevant locations. Here, we asked whether attention to object categories, in particular faces, shows similar location-independent tuning. Using EEG, we measured the face-selective N170 component of the EEG signal to examine neural responses to faces at task-irrelevant locations while participants attended to faces at another task-relevant location. Across two experiments, we found that visual processing of faces was amplified at task-irrelevant locations when participants attended to faces relative to when participants attended to either buildings or scrambled face parts. The fact that we see this enhancement with the N170 suggests that these attentional effects occur at the earliest stage of face processing. Two additional behavioral experiments showed that it is easier to attend to the same object category across the visual field relative to two distinct categories, consistent with object-based attention spreading globally. Together, these results suggest that attention to high-level object categories shows similar spatially global effects on visual processing as attention to simple, individual, low-level features.
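A toy sketch of the kind of comparison described above (the time window, waveform shape, and data are illustrative assumptions, not the authors' EEG pipeline): average the signal in an N170 window for the task-irrelevant face location and compare the attend-faces and attend-buildings conditions.

```python
# Toy sketch: compare mean amplitude in an assumed N170 window (150-200 ms) for
# epochs in which faces vs. buildings were attended at another location. The
# simulated waveforms and all parameters are illustrative, not recorded EEG.
import numpy as np

rng = np.random.default_rng(1)
sfreq = 500                                   # sampling rate in Hz (toy)
times = np.arange(-0.1, 0.4, 1 / sfreq)       # epoch from -100 to 400 ms
n_trials = 200

def simulate_epochs(n170_gain):
    """Toy single-trial epochs: a negative deflection near 170 ms plus noise."""
    n170 = -n170_gain * np.exp(-0.5 * ((times - 0.17) / 0.02) ** 2)
    return n170 + rng.normal(0.0, 1.0, (n_trials, times.size))

attend_faces = simulate_epochs(n170_gain=4.0)      # faces attended elsewhere
attend_buildings = simulate_epochs(n170_gain=3.0)  # buildings attended elsewhere

win = (times >= 0.15) & (times <= 0.20)            # assumed N170 window
amp_faces = attend_faces[:, win].mean()
amp_buildings = attend_buildings[:, win].mean()
print(f"mean N170 amplitude, attend faces:     {amp_faces:.2f} a.u.")
print(f"mean N170 amplitude, attend buildings: {amp_buildings:.2f} a.u.")
print(f"difference (more negative = enhanced): {amp_faces - amp_buildings:.2f} a.u.")
```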
-
Abstract: Salient objects grab attention because they stand out from their surroundings. Whether this phenomenon is accomplished by bottom-up sensory processing or requires top-down guidance is debated. We tested these alternative hypotheses by measuring how early and in which cortical layer(s) neural spiking distinguished a target from a distractor. We measured synaptic and spiking activity across cortical columns in mid-level area V4 of male macaque monkeys performing visual search for a color singleton. A neural signature of attentional capture was observed in the earliest response in the input layer 4. The magnitude of this response predicted response time and accuracy. Errant behavior followed errant selection. Because this response preceded top-down influences and arose in the cortical layer not targeted by top-down connections, these findings demonstrate that feedforward activation of sensory cortex can underlie attentional priority.
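A sketch, under stated assumptions, of the latency analysis this abstract describes: for each cortical layer, find the first time at which target-in-receptive-field and distractor-in-receptive-field trials reliably differ. The simulated spiking data, layer onsets, significance threshold, and run-length criterion below are all illustrative, not the recorded V4 data or the authors' statistics.

```python
# Sketch under stated assumptions: toy spiking data (not the recorded V4 data),
# an arbitrary significance threshold, and a run-length criterion are used to
# estimate, per cortical layer, when responses first distinguish a target in
# the receptive field from a distractor in the receptive field.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
times = np.arange(0.0, 0.25, 0.001)    # seconds after search-array onset
layers = ["supragranular", "granular (L4)", "infragranular"]
onset = {"supragranular": 0.09, "granular (L4)": 0.06, "infragranular": 0.10}

def toy_rates(layer, is_target, n_trials=60):
    """Toy firing rates: target trials gain extra drive after a layer-specific latency."""
    base = 20.0 + 5.0 * rng.standard_normal((n_trials, times.size))
    if is_target:
        base += 8.0 * (times >= onset[layer])
    return base

for layer in layers:
    tgt, dst = toy_rates(layer, True), toy_rates(layer, False)
    _, p = stats.ttest_ind(tgt, dst, axis=0)          # pointwise comparison
    sig = p < 0.01
    # first bin that starts a run of at least 20 consecutive significant bins
    runs = np.convolve(sig.astype(int), np.ones(20, dtype=int), mode="valid") == 20
    latency = times[np.argmax(runs)] if runs.any() else float("nan")
    print(f"{layer:14s}: target/distractor selection at ~{latency * 1000:.0f} ms")
```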
-
Here, we report on the long-term stability of changes in behavior and brain activity following perceptual learning of conjunctions of simple motion features. Participants were trained for 3 weeks on a visual search task involving the detection of a dot moving in a “v”-shaped target trajectory among inverted “v”-shaped distractor trajectories. The first and last training sessions were carried out during functional magnetic resonance imaging (fMRI). Learning stability was again examined behaviorally and using fMRI 3 years after the end of training. Results show that the acquired behavioral improvements were remarkably stable over time and that these changes were specific to the trained target and distractor trajectories. A similar pattern was observed on the neuronal level when the representation of target and distractor stimuli was examined in early retinotopic visual cortex (V1–V3): training enhanced activity for the target relative to the surrounding distractors in the search array, and this enhancement persisted after 3 years. However, exchanging target and distractor trajectories abolished both neuronal and behavioral effects, suggesting that training-induced changes in stimulus representation are specific to the trained stimulus identities.
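A minimal sketch (toy numbers only, not the study's data) of the target-enhancement index implied above: the difference between target and surrounding-distractor responses in early visual cortex, compared across the pre-training, post-training, and 3-year follow-up sessions.

```python
# Toy numbers only (not the study's data): a simple target-enhancement index,
# target minus surrounding-distractor response in early visual cortex, tracked
# across sessions.
import numpy as np

# mean % signal change per session: [target trajectory, distractor trajectories]
sessions = {
    "pre-training":     np.array([0.42, 0.41]),
    "post-training":    np.array([0.55, 0.40]),
    "3-year follow-up": np.array([0.53, 0.41]),
}

for name, (target, distractor) in sessions.items():
    print(f"{name:16s} enhancement index = {target - distractor:+.2f} % signal change")
```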