Title: Simultaneous acquisition of multiple auditory-motor transformations reveals supra-syllabic motor planning in speech production
Abstract: Motor planning forms a critical bridge between psycholinguistic and motoric models of word production. While syllables are often considered the core planning unit in speech, growing evidence hints at supra-syllabic planning, but without firm empirical support. Here, we use differential adaptation to altered auditory feedback to provide novel, straightforward evidence for word-level planning. By introducing opposing perturbations to shared segmental content (e.g., raising the first vowel formant of “sev” in “seven” while lowering it in “sever”), we assess whether participants can use the larger word context to separately oppose the two perturbations. Critically, limb control research shows that such differential learning is possible only when the shared movement forms part of separate motor plans. We found differential adaptation in multisyllabic words, though smaller in magnitude than in monosyllabic words. These results strongly suggest that speech relies on an interactive motor planning process encompassing both syllables and words.
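To make the paradigm concrete, the sketch below shows one way a signed differential-adaptation index could be computed. It is a minimal illustration with simulated first-formant (F1) values; the variable and function names (f1_seven, f1_sever, adaptation) and all numbers are hypothetical, not the authors' data or analysis code.

```python
import numpy as np

# Toy illustration of differential adaptation to opposing F1 perturbations.
# All arrays, names, and numbers are hypothetical.
rng = np.random.default_rng(0)

# Simulated produced-F1 series (Hz), one value per trial, for the shared "sev":
# in "seven" the feedback F1 is shifted UP (opposing it means lowering produced F1),
# in "sever" it is shifted DOWN (opposing it means raising produced F1).
f1_seven = 550 + np.cumsum(rng.normal(-0.5, 2.0, size=200))  # drifts downward
f1_sever = 550 + np.cumsum(rng.normal(+0.5, 2.0, size=200))  # drifts upward

def adaptation(f1, n_baseline=20, n_hold=20):
    """Change in produced F1 from the baseline trials to the final hold trials."""
    return f1[-n_hold:].mean() - f1[:n_baseline].mean()

# Sign each context so that opposing its own perturbation counts as positive.
adapt_seven = -adaptation(f1_seven)  # perturbation up   -> oppose by lowering F1
adapt_sever = +adaptation(f1_sever)  # perturbation down -> oppose by raising F1

# If the two contexts adapt in opposite absolute directions, both signed values
# are positive and their average (a differential-adaptation index) is positive.
differential = (adapt_seven + adapt_sever) / 2
print(f"signed adaptation, seven: {adapt_seven:.1f} Hz")
print(f"signed adaptation, sever: {adapt_sever:.1f} Hz")
print(f"differential adaptation index: {differential:.1f} Hz")
```

Signing each context by the direction of its own perturbation means that word-specific opposition shows up as a positive average, whereas a single undifferentiated response to the shared "sev" would cancel out toward zero.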
Award ID(s): 2120506
PAR ID: 10520581
Author(s) / Creator(s): ; ;
Publisher / Repository: OSF
Date Published:
Edition / Version: 5
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Co-speech gestures are timed to occur with prosodically prominent syllables in several languages. In prior work in Indo-European languages, gestures are found to be attracted to stressed syllables, with gesture apexes preferentially aligning with syllables bearing higher and more dynamic pitch accents. Little research has examined the temporal alignment of co-speech gestures in African tonal languages, where metrical prominence is often hard to identify due to a lack of canonical stress correlates, and where a key function of pitch is in distinguishing between words, rather than marking intonational prominence. Here, we examine the alignment of co-speech gestures in two different Niger-Congo languages with very different word structures, Medʉmba (Grassfields Bantu, Cameroon) and Igbo (Igboid, Nigeria). Our findings suggest that the initial position in the stem tends to attract gestures in Medʉmba, while the final syllable in the word is the default position for gesture alignment in Igbo; phrase position also influences gesture alignment, but in language-specific ways. Though neither language showed strong evidence of elevated prominence of any individual tone value, gesture patterning in Igbo suggests that metrical structure at the level of the tonal foot is relevant to the speech-gesture relationship. Our results demonstrate how the speech-gesture relationship can be a window into patterns of word- and phrase-level prosody cross-linguistically. They also show that the relationship between gesture and tone (and the related notion of ‘tonal prominence’) is mediated by tone’s function in a language.  
  2. Background/Objectives: The Implicit Prosody Hypothesis (IPH) posits that individuals generate internal prosodic representations during silent reading, mirroring those produced in spoken language. While converging behavioral evidence supports the IPH, the underlying neurocognitive mechanisms remain largely unknown. Therefore, this study investigated the neurophysiological markers of sensitivity to speech rhythm cues during silent word reading. Methods: EEGs were recorded while participants silently read four-word sequences, each composed of either trochaic words (stressed on the first syllable) or iambic words (stressed on the second syllable). Each sequence was followed by a target word that was either metrically congruent or incongruent with the preceding rhythmic pattern. To investigate the effects of metrical expectancy and lexical stress type, we examined single-trial event-related potentials (ERPs) and time–frequency representations (TFRs) time-locked to target words. Results: The results showed significant differences based on the stress pattern expectancy and type. Specifically, words that carried unexpected stress elicited larger ERP negativities between 240 and 628 ms after the word onset. Furthermore, different frequency bands were sensitive to distinct aspects of the rhythmic structure in language. Alpha activity tracked the rhythmic expectations, and theta and beta activities were sensitive to both the expected rhythms and specific locations of the stressed syllables. Conclusions: The findings clarify neurocognitive mechanisms of phonological and lexical mental representations during silent reading using a conservative data-driven approach. Similarity with neural response patterns previously reported for spoken language contexts suggests shared neural networks for implicit and explicit speech rhythm processing, further supporting the IPH and emphasizing the centrality of prosody in reading. 
  3. Humans rarely speak without producing co-speech gestures of the hands, head, and other parts of the body. Co-speech gestures are also highly restricted in how they are timed with speech, typically synchronizing with prosodically-prominent syllables. What functional principles underlie this relationship? Here, we examine how the production of co-speech manual gestures influences spatiotemporal patterns of the oral articulators during speech production. We provide novel evidence that words uttered with accompanying co-speech gestures are produced with more extreme tongue and jaw displacement, and that the presence of a co-speech gesture contributes to greater temporal stability of oral articulatory movements. This effect, which we term coupling enhancement, differs from stress-based hyperarticulation in that differences in articulatory magnitude are not vowel-specific in their patterning. Speech and gesture synergies therefore constitute an independent variable to consider when modeling the effects of prosodic prominence on articulatory patterns. Our results are consistent with work in language acquisition and speech-motor control suggesting that synchronizing speech to gesture can entrain acoustic prominence.
  4. The existence of a neural representation for whole words (i.e., a lexicon) is a common feature of many models of speech processing. Prior studies have provided evidence for a visual lexicon containing representations of whole written words in an area of the ventral visual stream known as the visual word form area. Similar experimental support for an auditory lexicon containing representations of spoken words has yet to be shown. Using functional magnetic resonance imaging rapid adaptation techniques, we provide evidence for an auditory lexicon in the auditory word form area in the human left anterior superior temporal gyrus that contains representations highly selective for individual spoken words. Furthermore, we show that familiarization with novel auditory words sharpens the selectivity of their representations in the auditory word form area. These findings reveal strong parallels in how the brain represents written and spoken words, showing convergent processing strategies across modalities in the visual and auditory ventral streams.
  5. In this paper, we present MuteIt, an ear-worn system for recognizing unvoiced human commands. MuteIt presents an intuitive alternative to voice-based interactions, which can be unreliable in noisy environments, can be disruptive to those around us, and can compromise our privacy. We propose a twin-IMU setup to track the user's jaw motion and cancel motion artifacts caused by head and body movements. MuteIt processes jaw motion during word articulation to break each word signal into its constituent syllables, and each syllable further into phonemes (vowels, visemes, and plosives). Recognizing unvoiced commands by only tracking jaw motion is challenging: as a secondary articulator, the jaw does not move distinctively enough on its own for unvoiced speech recognition. MuteIt combines IMU data with the anatomy of jaw movement as well as principles from linguistics to model word recognition as an estimation problem. Rather than employing machine learning to train a word classifier, we reconstruct each word as a sequence of phonemes using a bi-directional particle filter, enabling the system to be easily scaled to a large set of words. We validate MuteIt with 20 subjects with diverse speech accents recognizing 100 common command words. MuteIt achieves a mean word recognition accuracy of 94.8% in noise-free conditions. Compared with common voice assistants, MuteIt outperforms them in noisy acoustic environments, achieving recognition accuracy above 90%. Even in the presence of motion artifacts, such as head movement, walking, and riding in a moving vehicle, MuteIt achieves a mean word recognition accuracy of 91% across all scenarios.
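To make the twin-IMU idea in the MuteIt abstract above concrete, here is a minimal sketch with simulated signals. It is an assumption-laden illustration, not the MuteIt implementation; every name, signal, and parameter in it is made up. The idea it shows: head and body motion is common to both sensors, while only the jaw-mounted sensor also picks up articulation, so fitting and subtracting the reference channel isolates the jaw component.

```python
import numpy as np

# Minimal sketch of twin-IMU motion-artifact cancellation on simulated data
# (hypothetical, not the MuteIt implementation).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 400)                      # 2 s at ~200 Hz
head_motion = 0.8 * np.sin(2 * np.pi * 1.5 * t)     # slow head/body sway
jaw_motion = 0.3 * np.sin(2 * np.pi * 6.0 * t)      # faster articulatory motion

# The jaw-mounted IMU sees both components; the reference IMU sees only the
# common head/body motion (plus sensor noise).
jaw_imu = head_motion + jaw_motion + 0.02 * rng.normal(size=t.size)
ref_imu = head_motion + 0.02 * rng.normal(size=t.size)

# Least-squares gain aligning the reference channel to the jaw channel,
# then subtraction of the fitted artifact estimate.
gain = np.dot(jaw_imu, ref_imu) / np.dot(ref_imu, ref_imu)
cleaned = jaw_imu - gain * ref_imu

err_before = np.mean((jaw_imu - jaw_motion) ** 2)
err_after = np.mean((cleaned - jaw_motion) ** 2)
print(f"mean-squared artifact before: {err_before:.4f}, after: {err_after:.4f}")
```

The real system goes well beyond this, segmenting the cleaned jaw motion into syllables and phonemes and reconstructing words with a bi-directional particle filter, but common-mode subtraction of this kind is the basic reason a second, reference IMU helps against walking, head movement, and vehicle motion.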