Abstract: Selective attention improves sensory processing of relevant information but can also impact the quality of perception. For example, attention increases visual discrimination performance and at the same time boosts apparent stimulus contrast of attended relative to unattended stimuli. Can attention also lead to perceptual distortions of visual representations? Optimal tuning accounts of attention suggest that processing is biased towards "off-tuned" features to maximize the signal-to-noise ratio in favor of the target, especially when targets and distractors are confusable. Here, we tested whether such tuning gives rise to phenomenological changes of visual features. We instructed participants to select a color among other colors in a visual search display and subsequently asked them to judge the appearance of the target color in a 2-alternative forced choice task. Participants consistently judged the target color to appear more dissimilar from the distractor color in feature space. Critically, the magnitude of these perceptual biases varied systematically with the similarity between target and distractor colors during search, indicating that attentional tuning quickly adapts to current task demands. In control experiments we ruled out possible non-attentional explanations such as color contrast or memory effects. Overall, our results demonstrate that selective attention warps the representational geometry of color space, resulting in profound perceptual changes across large swaths of feature space. Broadly, these results indicate that efficient attentional selection can come at a perceptual cost by distorting our sensory experience.
Individual Differences in Working Memory and the N2pc
The lateralized N2pc ERP component has been shown to be an effective marker of attentional object selection when elicited in a visual search task, specifically reflecting the selection of a target item among distractors. Moreover, when targets are known in advance, the visual search process is guided by representations of target features held in working memory at the time of search, which direct attention to objects with target-matching features. Previous studies have shown that manipulating working memory availability, via concurrent tasks or within-task manipulations, influences visual search performance and the N2pc. Other studies have indicated that visual (non-spatial) and spatial working memory manipulations make differential contributions to visual search. To investigate this, the current study assessed participants' visual and spatial working memory ability independently of the visual search task to determine whether such individual differences in working memory affect task performance and the N2pc. Participants (n = 205) completed a visual search task to elicit the N2pc and separate visual working memory (VWM) and spatial working memory (SPWM) assessments. Greater SPWM ability, but not VWM ability, was correlated with and predicted higher visual search accuracy and larger N2pc amplitudes. Neither VWM nor SPWM was related to N2pc latency. These results provide additional support for prior behavioral and neural visual search findings that spatial WM availability, whether as an ability of the participant's processing system or a function of task demands, plays an important role in efficient visual search.
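The N2pc described above is conventionally quantified as the contralateral-minus-ipsilateral difference in posterior ERP amplitude (typically at electrodes such as PO7/PO8) in roughly the 200-300 ms post-stimulus window. The sketch below illustrates that computation on simulated data; all array shapes, values, and the `n2pc_amplitude` helper are illustrative assumptions, not taken from the study.

```python
import numpy as np

# Hypothetical example of the standard N2pc computation:
# average ERPs recorded contralateral vs. ipsilateral to the target's
# hemifield, subtract, and take the mean in a 200-300 ms window.
rng = np.random.default_rng(0)
n_trials, n_samples = 100, 500   # 500 samples at 1000 Hz = 500 ms epoch
sfreq = 1000.0                   # sampling rate in Hz

# Simulated single-trial voltages (µV) at the posterior electrode
# contralateral and ipsilateral to the target's visual hemifield.
contra = rng.normal(0.0, 1.0, (n_trials, n_samples))
ipsi = rng.normal(0.0, 1.0, (n_trials, n_samples))
# Inject a small negativity at the contralateral site 200-300 ms in,
# mimicking a target-elicited N2pc.
contra[:, 200:300] -= 1.5

def n2pc_amplitude(contra, ipsi, sfreq, tmin=0.2, tmax=0.3):
    """Mean contra-minus-ipsi difference in the tmin-tmax window (seconds)."""
    lo, hi = int(tmin * sfreq), int(tmax * sfreq)
    diff_wave = contra.mean(axis=0) - ipsi.mean(axis=0)  # grand-average difference
    return diff_wave[lo:hi].mean()

amp = n2pc_amplitude(contra, ipsi, sfreq)
# A target-elicited N2pc shows up as a negative-going difference.
print(f"N2pc amplitude: {amp:.2f} µV")
```

Because the difference wave subtracts activity common to both hemispheres, lateralized target selection survives while bilateral sensory responses cancel, which is why the N2pc isolates attentional selection specifically.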
- PAR ID: 10312327
- Date Published:
- Journal Name: Frontiers in Human Neuroscience
- Volume: 15
- ISSN: 1662-5161
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Multiple types of memory guide attention: Both long-term memory (LTM) and working memory (WM) effectively guide visual search. Furthermore, both types of memories can capture attention automatically, even when detrimental to performance. It is less clear, however, how LTM and WM cooperate or compete to guide attention in the same task. In a series of behavioral experiments, we show that LTM and WM reliably cooperate to guide attention: Visual search is faster when both memories cue attention to the same spatial location (relative to when only one memory can guide attention). LTM and WM competed to guide attention in more limited circumstances: Competition only occurred when these memories were in different dimensions, particularly when participants searched for a shape and held an accessory color in mind. Finally, we found no evidence for asymmetry in either cooperation or competition: There was no evidence that WM helped (or hindered) LTM-guided search more than the other way around. This lack of asymmetry was found despite differences in LTM-guided and WM-guided search overall, and differences in how two LTMs and two WMs compete or cooperate with each other to guide attention. This work suggests that, even if only one memory is currently task-relevant, WM and LTM can cooperate to guide attention; they can also compete when distracting features are salient enough. This work elucidates interactions between WM and LTM during attentional guidance, adding to the literature on costs and benefits to attention from multiple active memories.
-
Prominent theories of visual working memory postulate that the capacity to maintain a particular visual feature is fixed. In contrast to these theories, recent studies have demonstrated that meaningful objects are better remembered than simple, nonmeaningful stimuli. Here, we tested whether this is solely because meaningful stimuli can recruit additional features—and thus more storage capacity—or whether simple visual features that are not themselves meaningful can also benefit from being part of a meaningful object. Across five experiments (30 young adults each), we demonstrated that visual working memory capacity for color is greater when colors are part of recognizable real-world objects compared with unrecognizable objects. Our results indicate that meaningful stimuli provide a potent scaffold to help maintain simple visual feature information, possibly because they effectively increase the objects’ distinctiveness from each other and reduce interference.
-
Abstract: Almost all models of visual working memory—the cognitive system that holds visual information in an active state—assume it has a fixed capacity: Some models propose a limit of three to four objects, where others propose there is a fixed pool of resources for each basic visual feature. Recent findings, however, suggest that memory performance is improved for real-world objects. What supports these increases in capacity? Here, we test whether the meaningfulness of a stimulus alone influences working memory capacity while controlling for visual complexity and directly assessing the active component of working memory using EEG. Participants remembered ambiguous stimuli that could either be perceived as a face or as meaningless shapes. Participants had higher performance and increased neural delay activity when the memory display consisted of more meaningful stimuli. Critically, by asking participants whether they perceived the stimuli as a face or not, we also show that these increases in visual working memory capacity and recruitment of additional neural resources are because of the subjective perception of the stimulus and thus cannot be driven by physical properties of the stimulus. Broadly, this suggests that the capacity for active storage in visual working memory is not fixed but that more meaningful stimuli recruit additional working memory resources, allowing them to be better remembered.
-
Abstract: Working memory (WM) is a capacity- and duration-limited system that forms a temporal bridge between fleeting sensory phenomena and possible actions. But how are the contents of WM used to guide behavior? A recent high-profile study reported evidence for simultaneous access to WM content and linked motor plans during WM-guided behavior, challenging serial models where task-relevant WM content is first selected and then mapped on to a task-relevant motor response. However, the task used in that study was not optimized to distinguish the selection of spatial versus nonspatial visual information stored in memory, nor to distinguish whether or how the chronometry of selecting nonspatial visual information stored in memory might differ from the selection of linked motor plans. Here, we revisited the chronometry of spatial, feature, and motor selection during WM-guided behavior using a task optimized to disentangle these processes. Concurrent EEG and eye position recordings revealed clear evidence for temporally dissociable spatial, feature, and motor selection during this task. Thus, our data reveal the existence of multiple WM selection mechanisms that belie conceptualizations of WM-guided behavior based on purely serial or parallel visuomotor processing.