Abstract

Selective attention improves sensory processing of relevant information but can also impact the quality of perception. For example, attention increases visual discrimination performance and at the same time boosts apparent stimulus contrast of attended relative to unattended stimuli. Can attention also lead to perceptual distortions of visual representations? Optimal tuning accounts of attention suggest that processing is biased towards “off-tuned” features to maximize the signal-to-noise ratio in favor of the target, especially when targets and distractors are confusable. Here, we tested whether such tuning gives rise to phenomenological changes of visual features. We instructed participants to select a color among other colors in a visual search display and subsequently asked them to judge the appearance of the target color in a two-alternative forced-choice task. Participants consistently judged the target color to appear more dissimilar from the distractor color in feature space. Critically, the magnitude of these perceptual biases varied systematically with the similarity between target and distractor colors during search, indicating that attentional tuning quickly adapts to current task demands. In control experiments we rule out possible non-attentional explanations such as color contrast or memory effects. Overall, our results demonstrate that selective attention warps the representational geometry of color space, resulting in profound perceptual changes across large swaths of feature space. Broadly, these results indicate that efficient attentional selection can come at a perceptual cost by distorting our sensory experience.
Complex background information slows down parallel search efficiency by reducing the strength of interitem interactions.
In the laboratory, visual search is often studied using uniform backgrounds. This contrasts with search in daily life, where potential search items appear against more complex backgrounds. In the present study, we examined the effects of background complexity on parallel visual search under conditions where objects are easily segregated from the background. Target–distractor similarity was sufficiently low that search could unfold in parallel, as indexed by reaction times that increase logarithmically with set size. The results indicate that when backgrounds are relatively simple (a sandy beach with water elements), search efficiency is comparable to search on a solid background. When backgrounds are more complex (a child's bedroom or a checkerboard), logarithmic search slopes increase compared to search on solid backgrounds, especially at higher levels of target–distractor similarity. The results are discussed in terms of different theories of visual search. We propose that the complex visual information appearing in between distractors slows individual distractor rejection times by weakening the strength of interitem interactions.
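The logarithmic set-size function used as the index of parallel search can be illustrated with a small sketch. The data and the helper name (`log_slope`) below are hypothetical and are not taken from the study; the sketch simply fits RT = a + b · log2(N + 1) by least squares and reads off the logarithmic slope b, where a larger b corresponds to less efficient search.

```python
import numpy as np

def log_slope(set_sizes, rts):
    """Fit RT = a + b * log2(N + 1) and return (intercept a, log slope b).

    The logarithmic slope b indexes parallel-search efficiency:
    a steeper slope means less efficient search.
    """
    x = np.log2(np.asarray(set_sizes, dtype=float) + 1.0)
    y = np.asarray(rts, dtype=float)
    # polyfit with degree 1 returns coefficients highest power first: [b, a]
    b, a = np.polyfit(x, y, 1)
    return a, b

# Hypothetical mean RTs (ms) at set sizes 1, 4, 16, 32,
# roughly linear in log2(N + 1) as in efficient parallel search:
a, b = log_slope([1, 4, 16, 32], [510, 545, 580, 598])
```

Comparing the fitted slope b across background conditions (solid vs. complex) would capture the slope increase the abstract reports.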
- Award ID(s): 1921735
- PAR ID: 10472493
- Publisher / Repository: APA
- Date Published:
- Journal Name: Journal of Experimental Psychology: Human Perception and Performance
- Volume: 49
- Issue: 7
- ISSN: 0096-1523
- Page Range / eLocation ID: 1053 to 1067
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Here, we report on the long-term stability of changes in behavior and brain activity following perceptual learning of conjunctions of simple motion features. Participants were trained for 3 weeks on a visual search task involving the detection of a dot moving in a “v”-shaped target trajectory among inverted “v”-shaped distractor trajectories. The first and last training sessions were carried out during functional magnetic resonance imaging (fMRI). Learning stability was again examined behaviorally and using fMRI 3 years after the end of training. Results show that acquired behavioral improvements were remarkably stable over time and that these changes were specific to trained target and distractor trajectories. A similar pattern was observed on the neuronal level, when the representation of target and distractor stimuli was examined in early retinotopic visual cortex (V1–V3): training enhanced activity for the target relative to the surrounding distractors in the search array and this enhancement persisted after 3 years. However, exchanging target and distractor trajectories abolished both neuronal and behavioral effects, suggesting that training-induced changes in stimulus representation are specific to trained stimulus identities.
-
Abstract

Previously rewarded stimuli slow response times (RTs) during visual search, despite being physically non-salient and no longer task-relevant or rewarding. Such value-driven attentional capture (VDAC) has been measured in a training-test paradigm. In the training phase, the search target is rendered in one of two colors (one predicting high reward and the other low reward). In this study, we modified this traditional training phase to include pre-cues that signaled reliable or unreliable information about the trial-to-trial color of the training-phase search target. Reliable pre-cues indicated the upcoming target color with certainty, whereas unreliable pre-cues indicated the target was equally likely to be one of two distinct colors. Thus, reliable and unreliable pre-cues provided certain and uncertain information, respectively, about the magnitude of the upcoming reward. We then tested for VDAC in a traditional test phase. We found that unreliably pre-cued distractors slowed RTs and drew more initial eye movements during search for the test-phase target, relative to reliably pre-cued distractors, thus providing novel evidence for an influence of information reliability on attentional capture. That said, our experimental manipulation also eliminated value-dependency (i.e., slowed RTs when a high-reward-predicting distractor was present relative to a low-reward-predicting distractor) for both kinds of distractors. Taken together, these results suggest that target-color uncertainty, rather than reward magnitude, played a critical role in modulating the allocation of value-driven attention in this study.
-
Visual tracking has achieved remarkable success in recent decades, but it remains a challenging problem due to appearance variations over time and complex, cluttered backgrounds. In this paper, we adopt a tracking-by-verification scheme to overcome these challenges by determining the patch in the subsequent frame that is most similar to the target template and most distinctive from the background context. A multi-stream deep similarity learning network is proposed to learn the similarity comparison model. The loss function of our network encourages the distance between a positive patch in the search region and the target template to be smaller than the distance between that positive patch and background patches. Within the learned feature space, even if the distance between positive patches grows because of appearance change or interference from background clutter, our method can use the relative distance to distinguish the target robustly. Moreover, the learned model is used directly for tracking, with no need for model updating or parameter fine-tuning, and runs at 45 fps on a single GPU. Our tracker achieves state-of-the-art performance on the visual tracking benchmark compared with other recent real-time trackers, and shows better capability in handling background clutter, occlusion, and appearance change.
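The relative-distance objective described in this abstract (template-to-positive distance smaller than background-to-positive distances) can be sketched as a hinge-style margin loss. This is a minimal NumPy illustration under assumed names (`relative_similarity_loss`, `margin`), not the paper's actual multi-stream network or loss implementation.

```python
import numpy as np

def relative_similarity_loss(d_pos_target, d_pos_background, margin=1.0):
    """Margin-based sketch of a relative-distance loss.

    Encourages the distance between a positive patch and the target
    template (d_pos_target) to be smaller, by at least `margin`, than
    the distances between that positive patch and background patches
    (d_pos_background). `margin` is an assumed hyperparameter.
    """
    d_bg = np.asarray(d_pos_background, dtype=float)
    # Hinge penalty whenever the template is not closer by at least `margin`.
    return float(np.maximum(0.0, margin + d_pos_target - d_bg).mean())
```

Only the relative ordering of distances matters here, which is why the loss can stay at zero even when absolute distances grow under appearance change, matching the robustness argument in the abstract.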
-
Visual working memory (VWM) representations interact with attentional guidance, but there is controversy over whether multiple VWM items simultaneously influence attentional guidance. Extant studies relied on continuous variables like response times, which can obscure capture, especially if VWM representations cycle through interactive and non-interactive states. Previous conflicting findings regarding guidance under high working memory (WM) load may be due to the use of noisier response-time measures that mix capture and non-capture trials. Thus, we employed an oculomotor paradigm to characterize discrete attentional capture events under both high and low VWM load. Participants held one or two colors in memory, then executed a saccade to a target disk. On some trials, a distractor (sometimes VWM-matching) appeared simultaneously with the target. Eye movements were more frequently directed to a VWM-matching than to a non-matching distractor in both load conditions. However, oculomotor capture by a VWM-matching distractor occurred less frequently under high than under low load. These results suggest that attention is automatically guided toward items matching only one of the two colors held in memory at a time, and that items in VWM may cycle through attention-guiding and non-guiding states when more than one item is held in VWM and the task does not require multiple items to be maintained in an active, attention-guiding state.