Attention allows us to select relevant information and ignore irrelevant information in our complex environments. What happens when attention shifts from one item to another? To answer this question, it is critical to have tools that accurately recover neural representations of both feature and location information with high temporal resolution. In the present study, we used human electroencephalography (EEG) and machine learning to explore how neural representations of object features and locations update across dynamic shifts of attention. We demonstrate that EEG can be used to create simultaneous time courses of neural representations of attended features (time point-by-time point inverted encoding model reconstructions) and attended location (time point-by-time point decoding) during both stable periods and across dynamic shifts of attention. Each trial presented two oriented gratings that flickered at the same frequency but had different orientations; participants were cued to attend one of them and on half of trials received a shift cue midtrial. We trained models on a stable period from Hold attention trials and then reconstructed/decoded the attended orientation/location at each time point on Shift attention trials. Our results showed that both feature reconstruction and location decoding dynamically track the shift of attention and that there may be time points during the shifting of attention when (1) feature and location representations become uncoupled and (2) both the previously attended and currently attended orientations are represented with roughly equal strength. The results advance our understanding of attentional shifts, and the noninvasive techniques developed in the present study lend themselves well to a wide variety of future applications.

NEW & NOTEWORTHY We used human EEG and machine learning to reconstruct neural response profiles during dynamic shifts of attention.
Specifically, we demonstrated that we could simultaneously read out both location and feature information from an attended item in a multistimulus display. Moreover, we examined how that readout evolves over time during the dynamic process of attentional shifts. These results provide insight into our understanding of attention, and this technique carries substantial potential for versatile extensions and applications.
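The inverted encoding model (IEM) procedure this abstract describes, training electrode weights during a stable attention period and then inverting the model to reconstruct the attended orientation, can be sketched in a few lines. The following is a minimal simulation, not the study's actual pipeline; the basis set, channel count, electrode count, and noise level are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 9 orientation "channels" (basis functions), 32 electrodes.
n_chan, n_elec, n_trials = 9, 32, 180
centers = np.linspace(0, 180, n_chan, endpoint=False)          # channel centers (deg)

def basis(theta):
    """Channel responses to orientation theta (raised-cosine tuning, assumed)."""
    d = np.deg2rad(theta - centers)                            # angular distance
    return np.maximum(np.cos(d), 0) ** 7

# Simulated training data: each trial's EEG is a linear mix of channel responses.
oris = rng.uniform(0, 180, n_trials)
C_train = np.stack([basis(o) for o in oris])                   # trials x channels
W_true = rng.normal(size=(n_chan, n_elec))                     # channel -> electrode
B_train = C_train @ W_true + 0.1 * rng.normal(size=(n_trials, n_elec))

# Step 1 (training): estimate electrode weights from stable-period data.
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)      # channels x electrodes

# Step 2 (test): invert the model to reconstruct the channel response profile
# at a given time point (here, one simulated test pattern for a 90-deg target).
B_test = basis(90.0) @ W_true + 0.1 * rng.normal(size=n_elec)
C_hat, *_ = np.linalg.lstsq(W_hat.T, B_test, rcond=None)       # reconstructed profile

print("peak channel (deg):", centers[np.argmax(C_hat)])
```

Repeating step 2 at every time point of a Shift trial yields the time-resolved reconstruction time course the abstract refers to.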
Cost of maintaining attentional templates for multiple colors revealed by EEG decoding
Abstract Attention to a feature enhances the sensory representation of that feature. However, it is less clear whether attentional modulation is limited when needing to attend to multiple features. Here, we studied both the behavioral and neural correlates of the attentional limit by examining the effectiveness of attentional enhancement of one versus two color features. We recorded electroencephalography (EEG) while observers completed a color-coherence detection task in which they detected a weak coherence signal, an over-representation of a target color. Before stimulus onset, we presented either one or two valid color cues. We found that, on the one-cue trials compared with the two-cue trials, observers were faster and more accurate, indicating that observers could more effectively attend to a single color at a time. Similar behavioral deficits associated with attending to multiple colors were observed in a pre-EEG practice session with one-, two-, three-, and no-cue trials. Further, we were able to decode the target color using the EEG signals measured from the posterior electrodes. Notably, we found that decoding accuracy was greater on the one-cue than on two-cue trials, indicating a stronger color signal on one-cue trials likely due to stronger attentional enhancement. Lastly, we observed a positive correlation between the decoding effect and the behavioral effect comparing one-cue and two-cue trials, suggesting that the decoded neural signals are functionally associated with behavior. Overall, these results provide behavioral and neural evidence pointing to a strong limit in the attentional enhancement of multiple features and suggest that there is a cost in maintaining multiple attentional templates in an active state.
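The core comparison in this abstract, cross-validated decoding of the target color that is stronger on one-cue than two-cue trials, can be sketched as follows. This is a toy simulation under assumed signal strengths and feature counts, not the study's data or analysis code.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical posterior-electrode features per trial; label = cued target
# color (0 vs. 1). The one-cue condition is simulated with a stronger color
# signal than the two-cue condition, mimicking stronger attentional enhancement.
n_trials, n_feat = 200, 24
labels = rng.integers(0, 2, n_trials)

def simulate(signal_strength):
    pattern = rng.normal(size=n_feat)                    # fixed "color" topography
    X = rng.normal(size=(n_trials, n_feat))              # background noise
    X += signal_strength * np.outer(labels * 2 - 1, pattern)
    return X

X_one = simulate(0.5)    # one cue: stronger enhancement (assumed strength)
X_two = simulate(0.2)    # two cues: diluted enhancement (assumed strength)

# Cross-validated decoding of the target color, per condition.
clf = LinearDiscriminantAnalysis()
acc_one = cross_val_score(clf, X_one, labels, cv=5).mean()
acc_two = cross_val_score(clf, X_two, labels, cv=5).mean()
print(f"one-cue decoding: {acc_one:.2f}, two-cue decoding: {acc_two:.2f}")
```

In this simulation the one-cue condition decodes better simply because its signal is stronger, which is the pattern the abstract reports in the real EEG data.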
- Award ID(s): 2019995
- PAR ID: 10587350
- Publisher / Repository: DOI PREFIX: 10.1162
- Date Published:
- Journal Name: Imaging Neuroscience
- Volume: 3
- ISSN: 2837-6056
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Feature-based attention is known to enhance visual processing globally across the visual field, even at task-irrelevant locations. Here, we asked whether attention to object categories, in particular faces, shows similar location-independent tuning. Using EEG, we measured the face-selective N170 component of the EEG signal to examine neural responses to faces at task-irrelevant locations while participants attended to faces at another task-relevant location. Across two experiments, we found that visual processing of faces was amplified at task-irrelevant locations when participants attended to faces relative to when participants attended to either buildings or scrambled face parts. The fact that we see this enhancement with the N170 suggests that these attentional effects occur at the earliest stage of face processing. Two additional behavioral experiments showed that it is easier to attend to the same object category across the visual field relative to two distinct categories, consistent with object-based attention spreading globally. Together, these results suggest that attention to high-level object categories shows similar spatially global effects on visual processing as attention to simple, individual, low-level features.
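The N170 measurement underlying this abstract, a mean amplitude in a short post-stimulus window of the trial-averaged ERP, can be sketched with simulated data. All amplitudes, window bounds, trial counts, and noise levels below are assumed for illustration; this is not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 500                                      # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.4, 1 / fs)              # epoch from -100 to 400 ms

def simulate_erp(n170_amp):
    """Toy occipitotemporal ERP: a negative deflection peaking near 170 ms."""
    n170 = n170_amp * np.exp(-((t - 0.17) ** 2) / (2 * 0.02 ** 2))
    trials = n170 + rng.normal(0, 2.0, size=(60, t.size))   # 60 noisy trials
    return trials.mean(axis=0)                              # trial-averaged ERP

# Larger (more negative) N170 when faces are attended, per the abstract's claim.
erp_attend_faces = simulate_erp(-6.0)        # microvolts, hypothetical
erp_attend_buildings = simulate_erp(-3.0)    # microvolts, hypothetical

# Standard measurement: mean amplitude in a 150-200 ms window.
win = (t >= 0.15) & (t <= 0.20)
amp_faces = erp_attend_faces[win].mean()
amp_buildings = erp_attend_buildings[win].mean()
print(f"N170 attend-faces: {amp_faces:.1f} uV, attend-buildings: {amp_buildings:.1f} uV")
```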
Moments of inattention to our surroundings may be essential to optimal cognitive functioning. Here, we investigated the hypothesis that humans spontaneously switch between two opposing attentional states during wakefulness—one in which we attend to the external environment (an “online” state) and one in which we disengage from the sensory environment to focus our attention internally (an “offline” state). We created a data-driven model of this proposed alternation between “online” and “offline” attentional states in humans, on a seconds-level timescale. Participants (n = 34) completed a sustained attention to response task while undergoing simultaneous high-density EEG and pupillometry recording and intermittently reporting on their subjective experience. “Online” and “offline” attentional states were initially defined using a cluster analysis applied to multimodal measures of (1) EEG spectral power, (2) pupil diameter, (3) RT, and (4) self-reported subjective experience. We then developed a classifier that labeled trials as belonging to the online or offline cluster with >95% accuracy, without requiring subjective experience data. This allowed us to classify all 5-sec trials in this manner, despite the fact that subjective experience was probed on only a small minority of trials. We report evidence of statistically discriminable “online” and “offline” states matching the hypothesized characteristics. Furthermore, the offline state strongly predicted memory retention for one of two verbal learning tasks encoded immediately prior. Together, these observations suggest that seconds-timescale alternation between online and offline states is a fundamental feature of wakefulness and that this may serve a memory processing function.
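The two-stage logic in this abstract, clustering on all measures to define states, then training a classifier that works without the intermittently probed subjective reports, can be sketched as follows. The feature values, separations, and trial counts are invented for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Hypothetical features per 5-s trial: [alpha power, pupil diameter, RT,
# subjective-report score] — the last is probed only sometimes in the study.
n = 400
online = rng.normal([1.0, 1.0, 0.4, 0.8], 0.3, size=(n // 2, 4))   # "online" trials
offline = rng.normal([2.0, 0.6, 0.7, 0.2], 0.3, size=(n // 2, 4))  # "offline" trials
X = np.vstack([online, offline])

# Step 1: define the two attentional states data-driven, by clustering on
# all four multimodal measures (including subjective reports).
states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: train a classifier that labels trials from the objective measures
# alone (drop the subjective-report column), so every trial can be scored.
X_objective = X[:, :3]
acc = cross_val_score(RandomForestClassifier(random_state=0),
                      X_objective, states, cv=5).mean()
print(f"cluster-label classification accuracy: {acc:.2f}")
```

With well-separated simulated states, the objective-measure classifier recovers the cluster labels with high accuracy, mirroring the >95% figure reported in the abstract.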
In models of visual spatial attention control, it is commonly held that top-down control signals originate in the dorsal attention network, propagating to the visual cortex to modulate baseline neural activity and bias sensory processing. However, the precise distribution of these top-down influences across different levels of the visual hierarchy is debated. In addition, it is unclear whether these baseline neural activity changes translate into improved performance. We analyzed attention-related baseline activity during the anticipatory period of a voluntary spatial attention task, using two independent functional magnetic resonance imaging datasets and two analytic approaches. First, as in prior studies, univariate analysis showed that covert attention significantly enhanced baseline neural activity in higher-order visual areas contralateral to the attended visual hemifield, while effects in lower-order visual areas (e.g., V1) were weaker and more variable. Second, in contrast, multivariate pattern analysis (MVPA) revealed significant decoding of attention conditions across all visual cortical areas, with lower-order visual areas exhibiting higher decoding accuracies than higher-order areas. Third, decoding accuracy, rather than the magnitude of univariate activation, was a better predictor of a subject's stimulus discrimination performance. Finally, the MVPA results were replicated across two experimental conditions, where the direction of spatial attention was either externally instructed by a cue or based on the participants’ free choice decision about where to attend. Together, these findings offer new insights into the extent of attentional biases in the visual hierarchy under top-down control and how these biases influence both sensory processing and behavioral performance.
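The dissociation this abstract turns on, MVPA detecting condition information even where univariate activation differences are weak, can be illustrated with a toy simulation. The voxel counts, effect size, and classifier choice below are assumptions, not the study's parameters.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Hypothetical voxel patterns for attend-left vs. attend-right. The mean
# (univariate) activation is matched across conditions by construction;
# only the multivoxel *pattern* differs, which MVPA can exploit.
n_trials, n_vox = 120, 50
pattern = rng.normal(size=n_vox)
pattern -= pattern.mean()                      # zero-mean: no univariate effect
y = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials, n_vox)) + 0.4 * np.outer(y * 2 - 1, pattern)

# Univariate contrast: mean signal difference between conditions (~0 here).
univariate_diff = X[y == 1].mean() - X[y == 0].mean()

# Multivariate contrast: cross-validated pattern decoding (well above chance).
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"mean-signal difference: {univariate_diff:+.3f}, decoding: {acc:.2f}")
```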
Abstract Studies of voluntary visual spatial attention have used attention-directing cues, such as arrows, to induce or instruct observers to focus selective attention on relevant locations in visual space to detect or discriminate subsequent target stimuli. In everyday vision, however, voluntary attention is influenced by a host of factors, most of which are quite different from the laboratory paradigms that use attention-directing cues. These factors include priming, experience, reward, meaning, motivations, and high-level behavioral goals. Attention that is endogenously directed in the absence of external attention-directing cues has been referred to as “self-initiated attention” or, as in our prior work, as “willed attention” where volunteers decide where to attend in response to a prompt to do so. Here, we used a novel paradigm that eliminated external influences (i.e., attention-directing cues and prompts) about where and/or when spatial attention should be directed. Using machine learning decoding methods, we showed that the well-known lateralization of EEG alpha power during spatial attention was also present during purely self-generated attention. By eliminating explicit cues or prompts that affect the allocation of voluntary attention, this work advances our understanding of the neural correlates of attentional control and provides steps toward the development of EEG-based brain–computer interfaces that tap into human intentions.
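The alpha-power lateralization that this abstract decodes can be computed as a simple hemispheric index. The sketch below simulates posterior left/right channels in which alpha is suppressed contralateral to the attended side; the sampling rate, amplitudes, and noise level are assumed, and real analyses would use many electrodes and proper spectral estimation.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)               # 2-s anticipatory window

def alpha_power(x):
    """Power in the 8-12 Hz band, from the periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= 8) & (freqs <= 12)].sum()

def simulate_trial(attend_left):
    """Left/right posterior channels; alpha suppressed contralaterally."""
    amp_l, amp_r = (1.0, 0.4) if attend_left else (0.4, 1.0)
    noise = lambda: 0.2 * rng.normal(size=t.size)
    left = amp_l * np.sin(2 * np.pi * 10 * t) + noise()
    right = amp_r * np.sin(2 * np.pi * 10 * t) + noise()
    return left, right

def lateralization(left, right):
    """Positive when alpha power is higher over the left hemisphere."""
    pl, pr = alpha_power(left), alpha_power(right)
    return (pl - pr) / (pl + pr)

li_attend_left = np.mean([lateralization(*simulate_trial(True)) for _ in range(50)])
li_attend_right = np.mean([lateralization(*simulate_trial(False)) for _ in range(50)])
print(f"attend-left LI: {li_attend_left:+.2f}, attend-right LI: {li_attend_right:+.2f}")
```

A decoder of attended side can be built directly on such indices (or on full alpha topographies), which is the kind of signal the abstract's machine-learning analysis exploits.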
