

Title: Parallel spatial channels converge at a bottleneck in anterior word-selective cortex

In most environments, the visual system is confronted with many relevant objects simultaneously. That is especially true during reading. However, behavioral data demonstrate that a serial bottleneck prevents recognition of more than one word at a time. We used fMRI to investigate how parallel spatial channels of visual processing converge into a serial bottleneck for word recognition. Participants viewed pairs of words presented simultaneously. We found that retinotopic cortex processed the two words in parallel spatial channels, one in each contralateral hemisphere. Responses were higher for attended than for ignored words but were not reduced when attention was divided. We then analyzed two word-selective regions along the occipitotemporal sulcus (OTS) of both hemispheres (subregions of the visual word form area, VWFA). Unlike retinotopic regions, each word-selective region responded to words on both sides of fixation. Nonetheless, a single region in the left hemisphere (posterior OTS) contained spatial channels for both hemifields that were independently modulated by selective attention. Thus, the left posterior VWFA supports parallel processing of multiple words. In contrast, activity in a more anterior word-selective region in the left hemisphere (mid OTS) was consistent with a single channel, showing (i) limited spatial selectivity, (ii) no effect of spatial attention on mean response amplitudes, and (iii) sensitivity to lexical properties of only one attended word. Therefore, the visual system can process two words in parallel up to a late stage in the ventral stream. The transition to a single channel is consistent with the observed bottleneck in behavior.

 
NSF-PAR ID: 10090469
Publisher / Repository: Proceedings of the National Academy of Sciences
Journal Name: Proceedings of the National Academy of Sciences
ISSN: 0027-8424
Page Range / eLocation ID: Article No. 201822137
Sponsoring Org: National Science Foundation
More Like this
  1. Key points

    Visual attention involves discrete multispectral oscillatory responses in visual and ‘higher‐order’ prefrontal cortices.

    Prefrontal cortex laterality effects during visual selective attention are poorly characterized.

    High‐definition transcranial direct current stimulation dynamically modulated right‐lateralized fronto‐visual theta oscillations compared to those observed in left fronto‐visual pathways.

    Increased connectivity in right fronto‐visual networks after stimulation of the left dorsolateral prefrontal cortex resulted in faster task performance in the context of distractors.

    Our findings show clear laterality effects in theta oscillatory activity along prefrontal–visual cortical pathways during visual selective attention.

    Abstract

    Studies of visual attention have implicated oscillatory activity in the recognition, protection and temporal organization of attended representations in visual cortices. These studies have also shown that higher‐order regions such as the prefrontal cortex are critical to attentional processing, but far less is understood about prefrontal laterality differences during attentional processing. To examine this, we selectively applied high‐definition transcranial direct current stimulation (HD‐tDCS) to the left or right dorsolateral prefrontal cortex (DLPFC). We predicted that HD‐tDCS of the left versus right prefrontal cortex would differentially modulate performance on a visual selective attention task, and alter the underlying oscillatory network dynamics. Our randomized crossover design included 27 healthy adults who underwent three separate sessions of HD‐tDCS (sham, left DLPFC and right DLPFC) for 20 min. Following stimulation, participants completed an attention protocol during magnetoencephalography. The resulting oscillatory dynamics were imaged using beamforming, and peak task‐related neural activity was subjected to dynamic functional connectivity analyses to evaluate the impact of stimulation site (i.e. left or right DLPFC) on neural interactions. Our results indicated that HD‐tDCS over the left DLPFC differentially modulated right fronto‐visual functional connectivity within the theta band compared to HD‐tDCS of the right DLPFC and, further, specifically modulated the oscillatory response for detecting targets among an array of distractors. Importantly, these findings provide network‐specific insight into the complex oscillatory mechanisms serving visual selective attention.

     
  2. Abstract

    During language processing, people make rapid use of contextual information to promote comprehension of upcoming words. When new words are learned implicitly, information contained in the surrounding context can provide constraints on their possible meaning. In the current study, EEG was recorded as participants listened to a series of three sentences, each containing an identical target pseudoword, with the aim of using contextual information in the surrounding language to identify a meaning representation for the novel word. In half of the trials, sentences were semantically coherent so that participants could develop a single representation for the novel word that fit all contexts. Other trials contained unrelated sentence contexts so that meaning associations were not possible. We observed greater theta band enhancement over the left hemisphere across central and posterior electrodes in response to pseudowords processed across semantically related compared to unrelated contexts. Additionally, relative alpha and beta band suppression was increased prior to pseudoword onset in trials where contextual information more readily promoted pseudoword meaning associations. Under the hypothesis that theta enhancement indexes processing demands during lexical access, the current study provides evidence for selective online memory retrieval for novel words learned implicitly in a spoken context.
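    Band-specific effects like the theta enhancement described above are typically quantified from the EEG power spectrum. A minimal sketch of that computation, with an assumed sampling rate and a simulated trace in place of the study's recordings:

    ```python
    import numpy as np

    def band_power(signal, fs, low, high):
        """Mean spectral power of `signal` within the [low, high] Hz band."""
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        power = np.abs(np.fft.rfft(signal)) ** 2
        mask = (freqs >= low) & (freqs <= high)
        return power[mask].mean()

    fs = 250  # Hz, assumed sampling rate
    t = np.arange(0, 2.0, 1.0 / fs)
    # Simulated trace: a 6 Hz (theta-band) rhythm plus noise
    rng = np.random.default_rng(0)
    eeg = np.sin(2 * np.pi * 6 * t) + 0.3 * rng.standard_normal(t.size)

    theta = band_power(eeg, fs, 4, 8)   # theta band, 4-8 Hz
    alpha = band_power(eeg, fs, 8, 13)  # alpha band, 8-13 Hz
    ```

    For the simulated trace, theta power dominates alpha power, mirroring the kind of band-limited enhancement reported here (the study's actual pipeline, electrodes and band edges are not specified in the abstract).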

     
  3. In visual word recognition, having more orthographic neighbors (words that differ by a single letter) generally speeds access to a target word. But neighbors can mismatch at any letter position. In light of evidence that information content varies between letter positions, we consider how neighbor effects might vary across letter positions. Results from a word naming task indicate that response latencies are better predicted by the relative number of positional friends and enemies (respectively, neighbors that match the target at a given letter position and those that mismatch) at some letter positions than at others. In particular, benefits from friends are most pronounced at positions associated with low a priori uncertainty (positional entropy). We consider how these results relate to previous accounts of position-specific effects and how such effects might emerge in serial and parallel processing systems. 
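    The positional friend/enemy counts described above can be computed directly from a lexicon of same-length words. A minimal sketch with a toy word list (illustrative only, not the study's stimuli):

    ```python
    def positional_friends_enemies(target, lexicon):
        """For each letter position of `target`, count neighbors (words that
        differ by exactly one letter) that match the target at that position
        (friends) versus mismatch it there (enemies)."""
        friends = [0] * len(target)
        enemies = [0] * len(target)
        for word in lexicon:
            if len(word) != len(target) or word == target:
                continue
            diffs = [i for i in range(len(target)) if word[i] != target[i]]
            if len(diffs) != 1:
                continue  # not an orthographic neighbor
            mismatch = diffs[0]
            enemies[mismatch] += 1
            for i in range(len(target)):
                if i != mismatch:
                    friends[i] += 1
        return friends, enemies

    lexicon = ["cat", "bat", "hat", "cot", "car", "cap"]
    friends, enemies = positional_friends_enemies("cat", lexicon)
    # "bat"/"hat" mismatch "cat" at position 0, "cot" at position 1,
    # "car"/"cap" at position 2, so enemies == [2, 1, 2]
    ```

    The relative balance of these per-position counts is the predictor the naming-latency analyses relate to response times.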
  4. Abstract

    Visual word recognition is facilitated by the presence of orthographic neighbors that mismatch the target word by a single letter substitution. However, researchers typically do not consider where neighbors mismatch the target. In light of evidence that some letter positions are more informative than others, we investigate whether the influence of orthographic neighbors differs across letter positions. To do so, we quantify the number of enemies at each letter position (how many neighbors mismatch the target word at that position). Analyses of reaction time data from a visual word naming task indicate that the influence of enemies differs across letter positions, with the negative impacts of enemies being most pronounced at letter positions where readers have low prior uncertainty about which letters they will encounter (i.e., positions with low entropy). To understand the computational mechanisms that give rise to such positional entropy effects, we introduce a new computational model, VOISeR (Visual Orthographic Input Serial Reader), which receives orthographic inputs in parallel and produces an over‐time sequence of phonemes as output. VOISeR produces a similar pattern of results as in the human data, suggesting that positional entropy effects may emerge even when letters are not sampled serially. Finally, we demonstrate that these effects also emerge in human subjects' data from a lexical decision task, illustrating the generalizability of positional entropy effects across visual word recognition paradigms. Taken together, such work suggests that research into orthographic neighbor effects in visual word recognition should also consider differences between letter positions.
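    The positional entropy measure invoked here is the Shannon entropy of the letter distribution at each position. A minimal sketch, assuming a toy lexicon of equal-length, equally frequent words:

    ```python
    import math
    from collections import Counter

    def positional_entropy(lexicon):
        """Shannon entropy (bits) of the letter distribution at each
        position, treating every word in the lexicon as equally likely."""
        entropies = []
        for pos in range(len(lexicon[0])):
            counts = Counter(word[pos] for word in lexicon)
            total = sum(counts.values())
            h = -sum((c / total) * math.log2(c / total)
                     for c in counts.values())
            entropies.append(h)
        return entropies

    # Position 0 varies freely; positions 1 and 2 are fully predictable
    lexicon = ["bat", "cat", "hat", "rat"]
    h = positional_entropy(lexicon)  # [2.0, 0.0, 0.0]
    ```

    On the abstract's account, enemies are most harmful at the low-entropy positions (here, positions 1 and 2), where readers' prior expectations about the letter are strongest.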

     
  5. Abstract

    The grouping of sensory stimuli into categories is fundamental to cognition. Previous research in the visual and auditory systems supports a two‐stage processing hierarchy that underlies perceptual categorization: (a) a “bottom‐up” perceptual stage in sensory cortices where neurons show selectivity for stimulus features and (b) a “top‐down” second stage in higher level cortical areas that categorizes the stimulus‐selective input from the first stage. In order to test the hypothesis that the two‐stage model applies to the somatosensory system, 14 human participants were trained to categorize vibrotactile stimuli presented to their right forearm. Then, during an fMRI scan, participants actively categorized the stimuli. Representational similarity analysis revealed stimulus selectivity in areas including the left precentral and postcentral gyri, the supramarginal gyrus, and the posterior middle temporal gyrus. Crucially, we identified a single category‐selective region in the left ventral precentral gyrus. Furthermore, an estimation of directed functional connectivity delivered evidence for robust top‐down connectivity from the second to first stage. These results support the validity of the two‐stage model of perceptual categorization for the somatosensory system, suggesting common computational principles and a unified theory of perceptual categorization across the visual, auditory, and somatosensory systems.
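    Representational similarity analysis of the kind used above compares conditions via a representational dissimilarity matrix (RDM) over response patterns. A minimal sketch, with simulated voxel patterns standing in for the study's fMRI data:

    ```python
    import numpy as np

    def rdm(patterns):
        """Representational dissimilarity matrix: 1 minus the Pearson
        correlation between each pair of condition patterns (rows)."""
        return 1.0 - np.corrcoef(patterns)

    rng = np.random.default_rng(1)
    # Simulated responses: 4 stimulus conditions x 50 voxels
    patterns = rng.standard_normal((4, 50))
    d = rdm(patterns)  # symmetric 4 x 4 matrix with a zero diagonal
    ```

    Stimulus-selective regions are then identified by relating such RDMs to model dissimilarity structures; the connectivity estimation mentioned in the abstract is a separate, directed analysis.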

     