This content will become publicly available on May 1, 2026
Title: A left-lateralized dorsolateral prefrontal network for naming
The ability to connect the form and meaning of a concept, known as word retrieval, is fundamental to human communication. While various input modalities could lead to identical word retrieval, the exact neural dynamics supporting this process relevant to daily auditory discourse remain poorly understood. Here, we recorded neurosurgical electrocorticography (ECoG) data from 48 patients and dissociated two key language networks that highly overlap in time and space, critical for word retrieval. Using unsupervised temporal clustering techniques, we found a semantic processing network located in the middle and inferior frontal gyri. This network was distinct from an articulatory planning network in the inferior frontal and precentral gyri, which was invariant to input modalities. Functionally, we confirmed that the semantic processing network encodes word surprisal during sentence perception. These findings elucidate neurophysiological mechanisms underlying the processing of semantic auditory inputs ranging from passive language comprehension to conversational speech.
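Word surprisal, the quantity the semantic processing network is reported to encode, is the negative log probability of a word given its preceding context. A minimal sketch, assuming a toy bigram model (the probabilities below are illustrative only, not from the study):

```python
import math

# Toy conditional probabilities P(word | previous word); illustrative values only.
bigram_prob = {
    ("the", "dog"): 0.10,
    ("dog", "barked"): 0.40,
    ("dog", "sang"): 0.001,
}

def surprisal(prev_word, word, probs):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(probs[(prev_word, word)])

print(surprisal("dog", "barked", bigram_prob))  # predictable continuation -> low surprisal
print(surprisal("dog", "sang", bigram_prob))    # unexpected continuation -> high surprisal
```

In practice the conditional probabilities come from a trained language model rather than a hand-written table, but the surprisal computation itself is just this negative log probability.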
Conrad, Benjamin N.; Wilkey, Eric D.; Yeo, Darren J.; Price, Gavin R.
(Network Neuroscience)
Author Summary Previous studies of local activity levels suggest that both shared and distinct neural mechanisms support the processing of symbolic (Arabic digits) and nonsymbolic (dot sets) number stimuli, involving regions distributed across frontal, temporal, and parietal cortices. Network-level characterizations of functional connectivity patterns underlying number processing have gone unexplored, however. In this study we examined the whole-brain functional architecture of symbolic and nonsymbolic number comparison. Stronger community membership was observed among auditory regions during symbolic processing, and among cingulo-opercular/salience and basal ganglia networks during nonsymbolic processing. A dual versus unified fronto-parietal/dorsal attention community organization was observed for symbolic and nonsymbolic formats, respectively. Finally, the inferior temporal gyrus and left intraparietal sulcus, both thought to be preferentially involved in processing number symbols, demonstrated robust differences in community membership between formats.
Abstract Statistical learning (SL) is the ability to detect and learn regularities from input and is foundational to language acquisition. Despite the dominant role of SL as a theoretical construct for language development, there is a lack of direct evidence supporting the shared neural substrates underlying language processing and SL. It is also not clear whether the similarities, if any, are related to linguistic processing, or statistical regularities in general. The current study tests whether the brain regions involved in natural language processing are similarly recruited during auditory, linguistic SL. Twenty-two adults performed an auditory linguistic SL task, an auditory nonlinguistic SL task, and a passive story listening task as their neural activation was monitored. Within the language network, the left posterior temporal gyrus showed sensitivity to embedded speech regularities during auditory, linguistic SL, but not auditory, nonlinguistic SL. Using a multivoxel pattern similarity analysis, we uncovered similarities between the neural representation of auditory, linguistic SL, and language processing within the left posterior temporal gyrus. No other brain regions showed similarities between linguistic SL and language comprehension, suggesting that a shared neurocomputational process for auditory SL and natural language processing within the left posterior temporal gyrus is specific to linguistic stimuli.
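The multivoxel pattern similarity analysis mentioned above compares spatial activation patterns across tasks within a region: shared neural representations show up as correlated voxel patterns. A minimal sketch with synthetic data (Pearson correlation is one common similarity measure; the study's exact pipeline is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic voxel activation patterns for one 50-voxel region of interest.
shared = rng.normal(size=50)                       # hypothetical shared representation
pattern_sl = shared + 0.3 * rng.normal(size=50)    # linguistic SL task
pattern_lang = shared + 0.3 * rng.normal(size=50)  # story listening task
pattern_nonling = rng.normal(size=50)              # nonlinguistic SL task (unrelated)

def pattern_similarity(a, b):
    """Pearson correlation between two voxel activation patterns."""
    return np.corrcoef(a, b)[0, 1]

# Patterns built on shared structure correlate; unrelated ones do not.
print(pattern_similarity(pattern_sl, pattern_lang))
print(pattern_similarity(pattern_nonling, pattern_lang))
```

A region-specific effect like the one reported for the left posterior temporal gyrus corresponds to the first similarity exceeding the second only within that region's voxels.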
Li, Jixing; Lai, Marco; Pylkkänen, Liina
(Imaging Neuroscience)
Abstract Naturalistic paradigms using movies or audiobooks have become increasingly popular in cognitive neuroscience, but connecting them to findings from controlled experiments remains rare. Here, we aim to bridge this gap in the context of semantic composition in language processing, which is typically examined using a “minimal” two-word paradigm. Using magnetoencephalography (MEG), we investigated whether the neural signatures of semantic composition observed in an auditory two-word paradigm can extend to naturalistic story listening, and vice versa. Our results demonstrate consistent differentiation between phrases and single nouns in the left anterior and middle temporal lobe, regardless of the context. Notably, this distinction emerged later during naturalistic listening. Yet this latency difference disappeared when accounting for various factors in the naturalistic data, such as prosody, word rate, word frequency, surprisal, and emotional content. These findings suggest the presence of a unified compositional process underlying both isolated and connected speech comprehension.
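"Accounting for" factors such as word rate, frequency, and surprisal typically means entering them as nuisance regressors alongside the contrast of interest, so the contrast's effect is estimated with the covariates partialled out. A minimal ordinary-least-squares sketch with synthetic data (variable names and values are illustrative, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # number of word epochs

# Illustrative predictors: the contrast of interest plus two nuisance covariates.
phrase_vs_noun = rng.integers(0, 2, n).astype(float)
word_freq = rng.normal(size=n)
surprisal = rng.normal(size=n)

# Synthetic neural response driven by the contrast and the covariates.
response = 2.0 * phrase_vs_noun + 1.0 * word_freq + 0.5 * surprisal + rng.normal(size=n)

# Design matrix with an intercept column; solve OLS via least squares.
X = np.column_stack([np.ones(n), phrase_vs_noun, word_freq, surprisal])
beta, *_ = np.linalg.lstsq(X, response, rcond=None)
print(beta[1])  # contrast effect with covariates controlled; close to the true 2.0
```

If an apparent difference between conditions (e.g., a latency shift) vanishes once the covariates are in the model, the difference is attributable to those covariates rather than to the contrast itself.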
Damera, Srikanth R.; Chang, Lillian; Nikolov, Plamen P.; Mattei, James A.; Banerjee, Suneel; Glezer, Laurie S.; Cox, Patrick H.; Jiang, Xiong; Rauschecker, Josef P.; Riesenhuber, Maximilian
(Neurobiology of Language)
Abstract The existence of a neural representation for whole words (i.e., a lexicon) is a common feature of many models of speech processing. Prior studies have provided evidence for a visual lexicon containing representations of whole written words in an area of the ventral visual stream known as the visual word form area. Similar experimental support for an auditory lexicon containing representations of spoken words has yet to be shown. Using functional magnetic resonance imaging rapid adaptation techniques, we provide evidence for an auditory lexicon in the auditory word form area in the human left anterior superior temporal gyrus that contains representations highly selective for individual spoken words. Furthermore, we show that familiarization with novel auditory words sharpens the selectivity of their representations in the auditory word form area. These findings reveal strong parallels in how the brain represents written and spoken words, showing convergent processing strategies across modalities in the visual and auditory ventral streams.
Damera, Srikanth R.; Malone, Patrick S.; Stevens, Benson W.; Klein, Richard; Eberhardt, Silvio P.; Auer, Edward T.; Bernstein, Lynne E.; Riesenhuber, Maximilian
(The Journal of Neuroscience)
It has been postulated that the brain is organized by “metamodal,” sensory-independent cortical modules capable of performing tasks (e.g., word recognition) in both “standard” and novel sensory modalities. Still, this theory has primarily been tested in sensory-deprived individuals, with mixed evidence in neurotypical subjects, thereby limiting its support as a general principle of brain organization. Critically, current theories of metamodal processing do not specify requirements for successful metamodal processing at the level of neural representations. Specification at this level may be particularly important in neurotypical individuals, where novel sensory modalities must interface with existing representations for the standard sense. Here we hypothesized that effective metamodal engagement of a cortical area requires congruence between stimulus representations in the standard and novel sensory modalities in that region. To test this, we first used fMRI to identify bilateral auditory speech representations. We then trained 20 human participants (12 female) to recognize vibrotactile versions of auditory words using one of two auditory-to-vibrotactile algorithms. The vocoded algorithm attempted to match the encoding scheme of auditory speech while the token-based algorithm did not. Crucially, using fMRI, we found that only in the vocoded group did trained-vibrotactile stimuli recruit speech representations in the superior temporal gyrus and lead to increased coupling between them and somatosensory areas. Our results advance our understanding of brain organization by providing new insight into unlocking the metamodal potential of the brain, thereby benefitting the design of novel sensory substitution devices that aim to tap into existing processing streams in the brain. Significance Statement: It has been proposed that the brain is organized by “metamodal,” sensory-independent modules specialized for performing certain tasks.
This idea has inspired therapeutic applications, such as sensory substitution devices, for example, enabling blind individuals “to see” by transforming visual input into soundscapes. Yet, other studies have failed to demonstrate metamodal engagement. Here, we tested the hypothesis that metamodal engagement in neurotypical individuals requires matching the encoding schemes between stimuli from the novel and standard sensory modalities. We trained two groups of subjects to recognize words generated by one of two auditory-to-vibrotactile transformations. Critically, only vibrotactile stimuli that were matched to the neural encoding of auditory speech engaged auditory speech areas after training. This suggests that matching encoding schemes is critical to unlocking the brain's metamodal potential.
@article{osti_10638302,
title = {A left-lateralized dorsolateral prefrontal network for naming},
url = {https://par.nsf.gov/biblio/10638302},
DOI = {10.1016/j.celrep.2025.115677},
abstractNote = {The ability to connect the form and meaning of a concept, known as word retrieval, is fundamental to human communication. While various input modalities could lead to identical word retrieval, the exact neural dynamics supporting this process relevant to daily auditory discourse remain poorly understood. Here, we recorded neurosurgical electrocorticography (ECoG) data from 48 patients and dissociated two key language networks that highly overlap in time and space, critical for word retrieval. Using unsupervised temporal clustering techniques, we found a semantic processing network located in the middle and inferior frontal gyri. This network was distinct from an articulatory planning network in the inferior frontal and precentral gyri, which was invariant to input modalities. Functionally, we confirmed that the semantic processing network encodes word surprisal during sentence perception. These findings elucidate neurophysiological mechanisms underlying the processing of semantic auditory inputs ranging from passive language comprehension to conversational speech.},
journal = {Cell Reports},
volume = {44},
number = {5},
publisher = {Cell},
author = {Yu, Leyao and Dugan, Patricia and Doyle, Werner and Devinsky, Orrin and Friedman, Daniel and Flinker, Adeen},
}