Title: Feature-based attention warps the perception of visual features
Abstract

Selective attention improves sensory processing of relevant information but can also impact the quality of perception. For example, attention increases visual discrimination performance and at the same time boosts apparent stimulus contrast of attended relative to unattended stimuli. Can attention also lead to perceptual distortions of visual representations? Optimal tuning accounts of attention suggest that processing is biased towards “off-tuned” features to maximize the signal-to-noise ratio in favor of the target, especially when targets and distractors are confusable. Here, we tested whether such tuning gives rise to phenomenological changes of visual features. We instructed participants to select a color among other colors in a visual search display and subsequently asked them to judge the appearance of the target color in a 2-alternative forced choice task. Participants consistently judged the target color to appear more dissimilar from the distractor color in feature space. Critically, the magnitude of these perceptual biases varied systematically with the similarity between target and distractor colors during search, indicating that attentional tuning quickly adapts to current task demands. In control experiments, we ruled out possible non-attentional explanations such as color contrast or memory effects. Overall, our results demonstrate that selective attention warps the representational geometry of color space, resulting in profound perceptual changes across large swaths of feature space. Broadly, these results indicate that efficient attentional selection can come at a perceptual cost by distorting our sensory experience.
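The off-tuning predicted by optimal tuning accounts can be made concrete with a toy population model. The sketch below is only an illustration under assumed parameters (Gaussian tuning curves, a 15° tuning width, a 1° hue grid), not the authors' analysis: it shows that the unit whose response best separates target from distractor is tuned away from the target when the two colors are similar, and that this off-target shift shrinks as their separation grows, mirroring the similarity-dependent biases reported above.

```python
import numpy as np

# Toy population of color-tuned units with Gaussian tuning curves on a
# one-dimensional hue axis (all parameter values are illustrative).
prefs = np.linspace(0.0, 180.0, 181)   # preferred hue of each unit (deg)
sigma = 15.0                           # tuning-curve width (deg)

def tuning(hue):
    """Response of every unit to a single presented hue."""
    return np.exp(-0.5 * ((prefs - hue) / sigma) ** 2)

def best_boosted_hue(target, distractor):
    """Preferred hue of the unit whose response best separates the
    target from the distractor: a stand-in for the unit that an
    optimal-tuning strategy would boost."""
    separation_score = tuning(target) - tuning(distractor)
    return prefs[np.argmax(separation_score)]

# As the target-distractor separation shrinks, the best unit to boost
# drifts away from the true target hue (here 90 deg), in the direction
# opposite the distractor: the "off-tuning" described above.
for sep in (80, 40, 20):
    print(sep, best_boosted_hue(target=90.0, distractor=90.0 - sep))
```

Running this prints the boosted unit's preferred hue for separations of 80°, 40°, and 20°: it sits at the 90° target for well-separated colors but drifts to roughly 96° when target and distractor are only 20° apart.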

 
Award ID(s): 1850738, 2032773
NSF-PAR ID: 10408068
Author(s) / Creator(s):
Publisher / Repository: Nature Publishing Group
Date Published:
Journal Name: Scientific Reports
Volume: 13
Issue: 1
ISSN: 2045-2322
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Previously rewarded stimuli slow response times (RTs) during visual search, despite being physically non-salient and no longer task-relevant or rewarding. Such value-driven attentional capture (VDAC) has been measured in a training-test paradigm. In the training phase, the search target is rendered in one of two colors (one predicting high reward and the other low reward). In this study, we modified this traditional training phase to include pre-cues that signaled reliable or unreliable information about the trial-to-trial color of the training-phase search target. Reliable pre-cues indicated the upcoming target color with certainty, whereas unreliable pre-cues indicated the target was equally likely to be one of two distinct colors. Thus, reliable and unreliable pre-cues provided certain and uncertain information, respectively, about the magnitude of the upcoming reward. We then tested for VDAC in a traditional test phase. We found that unreliably pre-cued distractors slowed RTs and drew more initial eye movements during search for the test-phase target, relative to reliably pre-cued distractors, thus providing novel evidence for an influence of information reliability on attentional capture. That said, our experimental manipulation also eliminated value-dependency (i.e., slowed RTs when a high-reward-predicting distractor was present relative to a low-reward-predicting distractor) for both kinds of distractors. Taken together, these results suggest that target-color uncertainty, rather than reward magnitude, played a critical role in modulating the allocation of value-driven attention in this study.

  2. Visual working memory (VWM) representations interact with attentional guidance, but there is controversy over whether multiple VWM items simultaneously influence attentional guidance. Extant studies relied on continuous variables like response times, which can obscure capture – especially if VWM representations cycle through interactive and non-interactive states. Previous conflicting findings regarding guidance under high working memory (WM) load may be due to the use of noisier response time measures that mix capture and non-capture trials. Thus, we employed an oculomotor paradigm to characterize discrete attentional capture events under both high and low VWM load. Participants held one or two colors in memory, then executed a saccade to a target disk. On some trials, a distractor (sometimes VWM-matching) appeared simultaneously with the target. Eye movements were more frequently directed to a VWM-matching than a non-matching distractor under both load conditions. However, oculomotor capture by a VWM-matching distractor occurred less frequently under high compared with low load. These results suggest that attention is automatically guided toward items matching only one of the two colors held in memory at a time: items in VWM may cycle through attention-guiding and non-guiding states when more than one item is held and the task does not require that multiple items be maintained in an active, attention-guiding state.
  3. Feature-based attention is known to enhance visual processing globally across the visual field, even at task-irrelevant locations. Here, we asked whether attention to object categories, in particular faces, shows similar location-independent tuning. Using EEG, we measured the face-selective N170 component of the EEG signal to examine neural responses to faces at task-irrelevant locations while participants attended to faces at another task-relevant location. Across two experiments, we found that visual processing of faces was amplified at task-irrelevant locations when participants attended to faces relative to when participants attended to either buildings or scrambled face parts. The fact that we see this enhancement with the N170 suggests that these attentional effects occur at the earliest stage of face processing. Two additional behavioral experiments showed that it is easier to attend to the same object category across the visual field relative to two distinct categories, consistent with object-based attention spreading globally. Together, these results suggest that attention to high-level object categories shows similar spatially global effects on visual processing as attention to simple, individual, low-level features. 
  4. Here, we report on the long-term stability of changes in behavior and brain activity following perceptual learning of conjunctions of simple motion features. Participants were trained for 3 weeks on a visual search task involving the detection of a dot moving in a “v”-shaped target trajectory among inverted “v”-shaped distractor trajectories. The first and last training sessions were carried out during functional magnetic resonance imaging (fMRI). Learning stability was again examined behaviorally and using fMRI 3 years after the end of training. Results show that acquired behavioral improvements were remarkably stable over time and that these changes were specific to trained target and distractor trajectories. A similar pattern was observed on the neuronal level, when the representation of target and distractor stimuli was examined in early retinotopic visual cortex (V1–V3): training enhanced activity for the target relative to the surrounding distractors in the search array and this enhancement persisted after 3 years. However, exchanging target and distractor trajectories abolished both neuronal and behavioral effects, suggesting that training-induced changes in stimulus representation are specific to trained stimulus identities. 
  5. Imagine you have lost your cell phone. Your eyes scan the cluttered table in front of you, searching for its familiar blue case. But what is happening within the visual areas of your brain while you search? One possibility is that neurons that represent relevant features such as 'blue' and 'rectangular' increase their activity. This might help you spot your phone among all the other objects on the table. Paying attention to specific features improves our performance on visual tasks that require detecting those features. The 'feature similarity gain model' proposes that this is because attention increases the activity of neurons sensitive to specific target features, such as ‘blue’ in the example above. But is this how the brain solves such challenges in practice? Previous studies examining this issue have relied on correlations. They have shown that increases in neural activity correlate with improved performance on visual tasks. But correlation does not imply causation. Lindsay and Miller have now used a computer model of the brain’s visual pathway to examine whether changes in neural activity cause improved performance. The model was trained to use feature similarity gain to detect an object within a set of photographs. As predicted, changes in activity like those that occur in the brain did indeed improve the model’s performance. Moreover, activity changes at later stages of the model's processing pathway produced bigger improvements than activity changes earlier in the pathway. This may explain why attention affects neural activity more at later stages in the visual pathway. But feature similarity gain is not the only possible explanation for the results. Lindsay and Miller show that another pattern of activity change also enhanced the model’s performance, and propose an experiment to distinguish between the two possibilities. Overall, these findings increase our understanding of how the brain processes sensory information. Work is ongoing to teach computers to process images as efficiently as the human visual system. The computer model used in this study is similar to those used in state-of-the-art computer vision. These findings could thus help advance artificial sensory processing too. 
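The feature-similarity gain rule described in the last digest has a compact form: each unit's response is multiplied by a gain that grows with the similarity between that unit's tuning and the attended feature. Below is a minimal, self-contained sketch of that rule on a single toy layer; the random tuning vectors, the cosine-similarity measure, and the gain strength beta are illustrative assumptions, not the network or parameters used by Lindsay and Miller.

```python
import numpy as np

# Minimal sketch of feature-similarity gain applied to one layer of units.
# Tuning vectors, beta, and the cosine-similarity form are assumptions.
rng = np.random.default_rng(0)

n_units, n_features = 8, 4
tuning = rng.normal(size=(n_units, n_features))          # each unit's feature tuning
tuning /= np.linalg.norm(tuning, axis=1, keepdims=True)  # unit-length tuning vectors

def attend(activity, attended_feature, beta=0.5):
    """Scale each unit's activity by how similar its tuning is to the
    attended feature: gain > 1 for similarly tuned units, < 1 otherwise."""
    f = attended_feature / np.linalg.norm(attended_feature)
    similarity = tuning @ f                              # cosine similarity, in [-1, 1]
    gain = np.clip(1.0 + beta * similarity, 0.0, None)   # keep gains non-negative
    return activity * gain

stimulus = rng.random(n_features)                        # some input feature pattern
activity = np.clip(tuning @ stimulus, 0.0, None)         # rectified feedforward response
boosted = attend(activity, attended_feature=tuning[0])   # attend to unit 0's feature
print(np.round(activity, 3))
print(np.round(boosted, 3))
```

In a multi-layer model the same rule can be applied at any stage; the digest's finding is that applying such gains at later stages of the processing pathway improved performance more than applying them early.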