Abstract: Our visual world consists of multiple objects, necessitating the identification of individual objects. Nevertheless, representations of visual objects often influence one another. Even when we selectively attend to a subset of visual objects, the representations of surrounding items are encoded and influence processing of the attended item(s). However, it remains unclear whether the effect of group ensemble representation on individual item representation occurs at the perceptual encoding phase, during the memory maintenance period, or both. Therefore, the current study used visual psychophysics experiments to investigate the contributions of perceptual and mnemonic bias to the observed effect of ensemble representation on individual size representation. Across five experiments, we found a consistent pattern of repulsive ensemble bias: the size of an individual target circle was consistently reported to be smaller than it actually was when presented alongside other circles with a larger mean size, and vice versa. There was a perceptual component to the bias, but mnemonic factors also influenced its magnitude. Specifically, the repulsion bias was strongest with a short retention period (0–50 ms), then weakened within a second to a magnitude that remained stable over a longer retention period (5,000 ms). This pattern of results persisted when we facilitated the processing of ensemble representation by increasing the set size (Experiment 1B) or post-cueing the target circle so that attention was distributed across all items (Experiment 2B).
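The repulsive bias described above can be quantified as the signed report error (reported minus true target size), split by whether the surrounding ensemble mean was larger or smaller than the target. A minimal sketch with simulated data (the repulsion gain, size ranges, and noise level are illustrative assumptions, not the study's parameters):

```python
import numpy as np

# Hypothetical illustration: quantify repulsive ensemble bias as the
# signed report error (reported - true target size), split by whether
# the surrounding ensemble mean was larger or smaller than the target.
rng = np.random.default_rng(0)

n_trials = 1000
true_size = rng.uniform(40, 60, n_trials)                 # target size (a.u.)
ensemble_mean = true_size + rng.uniform(-15, 15, n_trials)

# Simulated repulsion: reports are pushed AWAY from the ensemble mean,
# plus report noise (gain and noise are illustrative only).
repulsion_gain = 0.2
reported = (true_size
            - repulsion_gain * (ensemble_mean - true_size)
            + rng.normal(0, 2, n_trials))

error = reported - true_size
larger_ctx = ensemble_mean > true_size
print(f"mean error, ensemble larger than target:  {error[larger_ctx].mean():+.2f}")
print(f"mean error, ensemble smaller than target: {error[~larger_ctx].mean():+.2f}")
```

A repulsive pattern shows up as a negative mean error when the ensemble mean exceeds the target (targets reported smaller) and a positive mean error when it is below the target.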
-
Abstract: This opinion piece is part of a collection on the topic "What is attention?" Despite the word's place in the common vernacular, a satisfying definition for "attention" remains elusive. Part of the challenge is that there exist many different types of attention, which may or may not share common mechanisms. Here we review this literature and offer an intuitive definition that draws from aspects of prior theories and models of attention but is broad enough to recognize the various types of attention and the modalities it acts upon: attention as a multi-level system of weights and balances. While the specific mechanism(s) governing the weighting/balancing may vary across levels, the fundamental role of attention is to dynamically weigh and balance all signals, both externally generated and internally generated, such that the highest-weighted signals are selected and enhanced. Top-down, bottom-up, and experience-driven factors dynamically impact this balancing, and competition occurs both within and across multiple levels of processing. This idea of a multi-level system of weights and balances is intended to incorporate both external and internal attention and capture their myriad constantly interacting processes. We review key findings and open questions related to external attention guidance, internal attention and working memory, and broader attentional control (e.g., ongoing competition between external stimuli and internal thoughts) within the framework of this analogy. We also speculate about the implications of failures of attention in terms of weights and balances, ranging from momentary one-off errors to clinical disorders, as well as attentional development and degradation across the lifespan. This article is categorized under: Psychology > Attention; Neuroscience > Cognition
-
Abstract: Our behavioral goals shape how we process information via attentional filters that prioritize goal-relevant information, dictating both where we attend and what we attend to. When something unexpected or salient appears in the environment, it captures our spatial attention. Extensive research has focused on the spatiotemporal aspects of attentional capture, but what happens to concurrent nonspatial filters during visual distraction? Here, we demonstrate a novel, broader consequence of distraction: widespread disruption to filters that regulate category-specific object processing. We recorded fMRI while participants viewed arrays of face/house hybrid images. On distractor-absent trials, we found robust evidence for the standard signature of category-tuned attentional filtering: greater BOLD activation in fusiform face area during attend-faces blocks and in parahippocampal place area during attend-houses blocks. However, on trials where a salient distractor (a white rectangle) flashed abruptly around a nontarget location, not only was spatial attention captured, but the concurrent category-tuned attentional filter was disrupted, revealing a boost in activation for the to-be-ignored category. This disruption was robust, resulting in errant processing (and, early on, prioritization) of goal-inconsistent information. These findings provide a direct test of the filter disruption theory: that in addition to disrupting spatial attention, distraction also disrupts nonspatial attentional filters tuned to goal-relevant information. Moreover, these results reveal that, under certain circumstances, the filter disruption may be so profound as to induce a full reversal of the attentional control settings, which carries novel implications for both theory and real-world perception.
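The "standard signature of category-tuned attentional filtering" reduces to a simple contrast: for a face-selective region, activation when faces are attended minus activation when faces are ignored. A minimal sketch with simulated values (participant count, means, and variances are illustrative assumptions, not data from the study):

```python
import numpy as np

# Hypothetical sketch of a category-tuned filtering index for a
# face-selective ROI (e.g., FFA): attend-faces minus attend-houses
# activation. Intact filtering predicts a positive index; filter
# disruption predicts a shrunken (or reversed) index on distractor
# trials. All numbers below are illustrative, not the study's data.
rng = np.random.default_rng(1)
n = 20  # hypothetical participants

ffa_attend_faces = rng.normal(1.0, 0.3, n)    # mean BOLD, attend-faces blocks
ffa_attend_houses = rng.normal(0.4, 0.3, n)   # mean BOLD, attend-houses blocks

filtering_index = ffa_attend_faces - ffa_attend_houses
print(f"filtering index: {filtering_index.mean():.2f} "
      "(positive = category-tuned filtering intact)")
```

Under the filter-disruption account, computing the same index separately for distractor-present trials would yield a value near zero, or even negative if the control settings fully reverse.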
-
Attention allows us to select relevant and ignore irrelevant information from our complex environments. What happens when attention shifts from one item to another? To answer this question, it is critical to have tools that accurately recover neural representations of both feature and location information with high temporal resolution. In the present study, we used human electroencephalography (EEG) and machine learning to explore how neural representations of object features and locations update across dynamic shifts of attention. We demonstrate that EEG can be used to create simultaneous time courses of neural representations of attended features (time point-by-time point inverted encoding model reconstructions) and attended locations (time point-by-time point decoding) during both stable periods and across dynamic shifts of attention. Each trial presented two oriented gratings that flickered at the same frequency but had different orientations; participants were cued to attend one of them and, on half of trials, received a shift cue midtrial. We trained models on a stable period from Hold attention trials and then reconstructed/decoded the attended orientation/location at each time point on Shift attention trials. Our results showed that both feature reconstruction and location decoding dynamically track the shift of attention and that there may be time points during the shifting of attention when 1) feature and location representations become uncoupled and 2) both the previously attended and currently attended orientations are represented with roughly equal strength. The results offer insight into our understanding of attentional shifts, and the noninvasive techniques developed in the present study lend themselves well to a wide variety of future applications.
NEW & NOTEWORTHY: We used human EEG and machine learning to reconstruct neural response profiles during dynamic shifts of attention. Specifically, we demonstrated that we could simultaneously read out both location and feature information from an attended item in a multistimulus display. Moreover, we examined how that readout evolves over time during the dynamic process of attentional shifts. These results provide insight into our understanding of attention, and this technique carries substantial potential for versatile extensions and applications.
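The inverted encoding model (IEM) mentioned above works by modeling each sensor as a weighted sum of hypothetical orientation-tuned channels, estimating the weights on training data, and then inverting the weights to recover channel responses on held-out data. A minimal sketch on simulated data (the channel count, basis power, sensor count, and noise level are illustrative assumptions, not the study's analysis parameters):

```python
import numpy as np

# Minimal inverted encoding model (IEM) sketch for orientation.
# Forward model: sensor data = channel responses x weights + noise.
# Training estimates the weights; inversion recovers channel responses.
rng = np.random.default_rng(2)

n_chan, n_elec, n_trials = 8, 32, 400
centers = np.arange(0, 180, 180 / n_chan)   # channel centers (deg)

def basis(theta):
    """Half-wave-rectified cosine basis over 0-180 deg orientation space."""
    d = np.pi * (theta[:, None] - centers[None, :]) / 180.0
    return np.maximum(np.cos(d), 0) ** 7

orients = rng.uniform(0, 180, n_trials)
C = basis(orients)                                  # (trials, channels)
W_true = rng.normal(0, 1, (n_elec, n_chan))         # simulated sensor weights
B = C @ W_true.T + rng.normal(0, 0.5, (n_trials, n_elec))

train = np.arange(n_trials) < 300
# Training: estimate sensor weights from known channel responses.
W_hat = np.linalg.lstsq(C[train], B[train], rcond=None)[0].T   # (elec, chan)
# Inversion: estimate channel responses for held-out trials.
C_hat = np.linalg.lstsq(W_hat, B[~train].T, rcond=None)[0].T   # (trials, chan)

# Recenter each reconstruction on its true orientation and average,
# so an accurate readout peaks at the middle channel.
shift = np.round(orients[~train] / (180 / n_chan)).astype(int) % n_chan
recon = np.mean(
    [np.roll(C_hat[i], n_chan // 2 - shift[i]) for i in range(len(shift))],
    axis=0)
print("reconstruction peak channel:", recon.argmax(), "| center:", n_chan // 2)
```

Applied time point-by-time point, the same inversion step yields a reconstruction per time sample, which is what allows the attended orientation to be tracked continuously across a shift of attention.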