Title: Tuning Attention to Object Categories: Spatially Global Effects of Attention to Faces in Visual Processing
Feature-based attention is known to enhance visual processing globally across the visual field, even at task-irrelevant locations. Here, we asked whether attention to object categories, in particular faces, shows similar location-independent tuning. Using EEG, we measured the face-selective N170 component to examine neural responses to faces at task-irrelevant locations while participants attended to faces at another, task-relevant location. Across two experiments, visual processing of faces was amplified at task-irrelevant locations when participants attended to faces rather than to buildings or scrambled face parts. That this enhancement appeared in the N170 suggests that these attentional effects occur at the earliest stage of face processing. Two additional behavioral experiments showed that it is easier to attend to the same object category across the visual field than to two distinct categories, consistent with object-based attention spreading globally. Together, these results suggest that attention to high-level object categories has spatially global effects on visual processing similar to those of attention to simple, individual, low-level features.
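
As a purely illustrative aside (not part of the record above), the sketch below shows the general kind of measurement the abstract describes: epoching EEG around stimulus onsets and quantifying the N170 as the mean amplitude in a canonical 130-200 ms window at occipitotemporal electrodes. It uses MNE-Python, and the file name, event codes, channel picks, and time window are all hypothetical.

    # Illustrative only: epoch EEG around stimulus onsets and quantify the N170
    # as mean amplitude in a canonical window at occipitotemporal electrodes.
    # File name, event codes, channels, and windows are hypothetical.
    import mne

    raw = mne.io.read_raw_fif("sub01_raw.fif", preload=True)
    events = mne.find_events(raw)  # assumes a stimulus trigger channel exists
    event_id = {"attend_faces": 1, "attend_buildings": 2}

    epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
                        baseline=(None, 0), preload=True)

    picks = ["P7", "P8", "PO7", "PO8"]  # typical occipitotemporal sites
    for cond in event_id:
        evoked = epochs[cond].average(picks=picks)
        # Mean amplitude in a typical N170 window (~130-200 ms), in microvolts
        n170 = evoked.copy().crop(tmin=0.13, tmax=0.20).data.mean() * 1e6
        print(f"{cond}: mean N170-window amplitude = {n170:.2f} µV")
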
Award ID(s): 1829434
NSF-PAR ID: 10120149
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Journal of Cognitive Neuroscience
Volume: 31
Issue: 7
ISSN: 0898-929X
Page Range / eLocation ID: 937 to 947
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Speech processing often occurs amid competing inputs from other modalities, for example, listening to the radio while driving. We examined the extent to which dividing attention between auditory and visual modalities (bimodal divided attention) impacts neural processing of natural continuous speech from acoustic to linguistic levels of representation. We recorded electroencephalographic (EEG) responses while human participants performed a challenging primary visual task, imposing low or high cognitive load, and listened to audiobook stories as a secondary task. The two dual-task conditions were contrasted with an auditory single-task condition in which participants attended to the stories while ignoring visual stimuli. Behaviorally, the high-load dual-task condition was associated with lower speech comprehension accuracy relative to the other two conditions. We fitted multivariate temporal response function encoding models to predict EEG responses from acoustic and linguistic speech features at different representation levels, including auditory spectrograms and information-theoretic models of sublexical-, word-form-, and sentence-level representations. Neural tracking of most acoustic and linguistic features remained unchanged with increasing dual-task load, despite unambiguous behavioral and neural evidence that the high-load dual-task condition was more demanding. Compared to the auditory single-task condition, the dual-task conditions selectively reduced neural tracking of only some acoustic and linguistic features, mainly at latencies >200 ms, while earlier latencies were surprisingly unaffected. These findings indicate that the behavioral effects of bimodal divided attention on continuous speech processing arise not from impaired early sensory representations but likely from later cognitive processing stages. Crossmodal attention-related mechanisms may not be uniform across different speech processing levels.

     
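    As an editorial illustration of the modeling approach this abstract mentions (not the authors' actual pipeline or features), the sketch below fits a temporal response function that maps time-aligned stimulus features onto EEG using MNE-Python's ReceptiveField on made-up arrays; the feature counts, lags, and ridge penalty are assumptions.

        # Illustrative mTRF-style encoding model: predict EEG from time-aligned
        # stimulus features (e.g., spectrogram bands plus word-level surprisal).
        # All data, dimensions, and hyperparameters here are made up.
        import numpy as np
        from sklearn.linear_model import Ridge
        from mne.decoding import ReceptiveField

        sfreq = 64                                   # Hz, after downsampling
        n_times, n_feats, n_chans = 60 * sfreq, 17, 32
        rng = np.random.default_rng(0)
        X = rng.standard_normal((n_times, n_feats))  # stimulus features over time
        Y = rng.standard_normal((n_times, n_chans))  # EEG over time

        rf = ReceptiveField(tmin=-0.1, tmax=0.4, sfreq=sfreq,
                            estimator=Ridge(alpha=1.0), scoring="corrcoef")
        rf.fit(X[: n_times // 2], Y[: n_times // 2])             # first half: train
        scores = rf.score(X[n_times // 2 :], Y[n_times // 2 :])  # per-channel accuracy
        print("mean prediction accuracy (r):", np.mean(scores))
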
  2.
    When objects from two categories of expertise (e.g., faces and cars in dual car/face experts) are processed simultaneously, competition occurs across a variety of tasks. Here, we investigate whether competition between face and car processing also occurs during ensemble coding. The relationship between single-object recognition and ensemble coding is debated, but if ensemble coding relies on the same ability as object recognition, we would expect cars to interfere with ensemble coding of faces as a function of car expertise. We measured the ability to judge the variability in identity of arrays of faces in the presence of task-irrelevant distractors (cars or novel objects). On each trial, participants viewed two sequential arrays containing four faces and four distractors and judged which array was more diverse in face identity. We measured participants’ car expertise, object recognition ability, and face recognition ability. Using Bayesian statistics, we found evidence against competition as a function of car expertise during ensemble coding of faces. Face recognition ability predicted ensemble judgments for faces, regardless of the category of task-irrelevant distractors. This result suggests that ensemble coding is not susceptible to competition between different domains of similar expertise, unlike single-object recognition.
  3. Attention allows us to select relevant and ignore irrelevant information from our complex environments. What happens when attention shifts from one item to another? To answer this question, it is critical to have tools that accurately recover neural representations of both feature and location information with high temporal resolution. In the present study, we used human electroencephalography (EEG) and machine learning to explore how neural representations of object features and locations update across dynamic shifts of attention. We demonstrate that EEG can be used to create simultaneous time courses of neural representations of attended features (time point-by-time point inverted encoding model reconstructions) and attended location (time point-by-time point decoding) during both stable periods and across dynamic shifts of attention. Each trial presented two oriented gratings that flickered at the same frequency but had different orientations; participants were cued to attend one of them and on half of trials received a shift cue midtrial. We trained models on a stable period from Hold attention trials and then reconstructed/decoded the attended orientation/location at each time point on Shift attention trials. Our results showed that both feature reconstruction and location decoding dynamically track the shift of attention and that there may be time points during the shifting of attention when 1) feature and location representations become uncoupled and 2) both the previously attended and currently attended orientations are represented with roughly equal strength. The results offer insight into our understanding of attentional shifts, and the noninvasive techniques developed in the present study lend themselves well to a wide variety of future applications.

    NEW & NOTEWORTHY We used human EEG and machine learning to reconstruct neural response profiles during dynamic shifts of attention. Specifically, we demonstrated that we could simultaneously read out both location and feature information from an attended item in a multistimulus display. Moreover, we examined how that readout evolves over time during the dynamic process of attentional shifts. These results provide insight into our understanding of attention, and this technique carries substantial potential for versatile extensions and applications.
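    For readers unfamiliar with the train-on-a-stable-period, test-at-every-time-point logic described above, the sketch below shows one minimal version of it: a classifier fit to averaged activity from an assumed stable window of Hold-attention trials, then applied at each time point of Shift-attention trials to trace the attended location. It uses synthetic data and scikit-learn, and is a hedged illustration rather than the study's inverted encoding model or decoder.

        # Illustrative time-resolved decoding: train on a "stable" window, then
        # read out the attended location at every time point of held-out trials.
        # Shapes, windows, and labels are hypothetical, not the study's data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        n_hold, n_shift, n_chans, n_times = 200, 100, 32, 250
        hold_eeg = rng.standard_normal((n_hold, n_chans, n_times))    # Hold trials
        shift_eeg = rng.standard_normal((n_shift, n_chans, n_times))  # Shift trials
        hold_loc = rng.integers(0, 2, n_hold)                         # attended side (0/1)

        # Fit once on the average activity within an assumed stable training window
        stable = slice(100, 150)
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        clf.fit(hold_eeg[:, :, stable].mean(axis=-1), hold_loc)

        # Apply the same trained model at every time point of the Shift trials
        timecourse = np.stack([clf.predict_proba(shift_eeg[:, :, t])[:, 1]
                               for t in range(n_times)], axis=-1)     # (trials, times)
        print("decoded P(attend right) time course:", timecourse.shape)
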
  4.
    Humans demonstrate enhanced processing of human faces compared with animal faces, known as the own-species bias. This bias is important for identifying people who may cause harm, as well as for recognizing friends and kin. However, growing evidence also indicates a more general face bias. Faces have high evolutionary importance beyond conspecific interactions, as they aid in detecting predators and prey. Few studies have explored how these biases interact. In three experiments, we examined processing of human and animal faces, compared with each other and with nonface objects, which allowed us to assess both own-species and broader face biases. We used a dot-probe paradigm to examine human adults’ covert attentional biases for task-irrelevant human faces, animal faces, and objects. We replicated the own-species attentional bias for human faces relative to animal faces. We also found an attentional bias for animal faces relative to objects, consistent with the proposal that faces broadly receive privileged processing. Our findings suggest that humans may be attracted to a broad class of faces. Further, we found that while participants rapidly attended to human faces across all cue display durations, they attended to animal faces only when they had sufficient time to process them. Our findings reveal that the dot-probe paradigm is sensitive enough to capture both own-species and more general face biases, and that each has a different attentional signature, possibly reflecting their unique but overlapping evolutionary importance.
  5. Abstract

    Selective attention improves sensory processing of relevant information but can also impact the quality of perception. For example, attention increases visual discrimination performance and at the same time boosts the apparent contrast of attended relative to unattended stimuli. Can attention also lead to perceptual distortions of visual representations? Optimal tuning accounts of attention suggest that processing is biased towards “off-tuned” features to maximize the signal-to-noise ratio in favor of the target, especially when targets and distractors are confusable. Here, we tested whether such tuning gives rise to phenomenological changes of visual features. We instructed participants to select a color among other colors in a visual search display and subsequently asked them to judge the appearance of the target color in a two-alternative forced-choice task. Participants consistently judged the target color to appear more dissimilar from the distractor color in feature space. Critically, the magnitude of these perceptual biases varied systematically with the similarity between target and distractor colors during search, indicating that attentional tuning quickly adapts to current task demands. In control experiments, we ruled out possible non-attentional explanations such as color contrast or memory effects. Overall, our results demonstrate that selective attention warps the representational geometry of color space, resulting in profound perceptual changes across large swaths of feature space. Broadly, these results indicate that efficient attentional selection can come at a perceptual cost by distorting our sensory experience.

     