

Title: Reactive Inhibitory Control Precedes Overt Stuttering Events
Abstract

Research points to neurofunctional differences underlying fluent speech between stutterers and non-stutterers. Considerably less work has focused on processes that underlie stuttered vs. fluent speech, and most of that research has focused on speech motor processes despite contributions from cognitive processes prior to the onset of stuttered speech. We used MEG to test the hypothesis that reactive inhibitory control is triggered prior to stuttered speech. Twenty-nine stutterers completed a delayed-response task that featured a cue (prior to a go cue) signaling the imminent requirement to produce a word that was either stuttered or fluent. Consistent with our hypothesis, we observed increased beta power, likely emanating from the R-preSMA—an area implicated in reactive inhibitory control—in response to the cue preceding stuttered vs. fluent productions. Beta power differences between stuttered and fluent trials correlated with stuttering severity, and participants' percentage of trials stuttered increased exponentially with beta power in the R-preSMA. Trial-by-trial beta power modulations in the R-preSMA following the cue predicted whether a trial would be stuttered or fluent. Stuttered trials were also associated with delayed speech onset, suggesting an overall slowing or freezing of the speech motor system that may be a consequence of inhibitory control. Post-hoc analyses revealed that independently generated anticipated words were associated with greater beta power and more stuttering than researcher-assisted anticipated words, pointing to a relationship between self-perceived likelihood of stuttering (i.e., anticipation) and inhibitory control. This work offers a neurocognitive account of stuttering by characterizing cognitive processes that precede overt stuttering events.
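The trial-by-trial prediction result can be illustrated with a minimal sketch: simulated cue-period beta power values are fed to a single-predictor logistic model fit by gradient descent. All distributions, sample sizes, and the model itself are illustrative assumptions, not the authors' MEG analysis pipeline.

```python
import math
import random

random.seed(1)

# Simulate single-trial cue-period beta power (arbitrary units).
# Assumption: stuttered trials draw from a higher-mean distribution,
# mirroring the reported R-preSMA beta increase before stuttered speech.
n = 400
beta_power, stuttered = [], []
for _ in range(n):
    label = 1 if random.random() < 0.3 else 0      # ~30% of trials stuttered
    mu = 1.0 if label else 0.0
    beta_power.append(random.gauss(mu, 1.0))
    stuttered.append(label)

# Fit P(stuttered) ~ sigmoid(w * beta + b) by gradient descent —
# a stand-in for the trial-by-trial prediction analysis.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(2000):
    gw = gb = 0.0
    for x, y in zip(beta_power, stuttered):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / n
    b -= lr * gb / n

# A positive weight means higher beta power raises the
# predicted probability that the trial is stuttered.
assert w > 0
```

The positive fitted weight is the toy analogue of the paper's finding that greater post-cue beta power predicts a stuttered rather than fluent trial.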

 
NSF-PAR ID: 10491159
Publisher / Repository: DOI prefix 10.1162
Journal Name: Neurobiology of Language
ISSN: 2641-4368
Pages: 1-49
Sponsoring Org: National Science Foundation
More Like this
  1. Bizley, Jennifer K. (Ed.)

Hearing one’s own voice is critical for fluent speech production as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update the motor commands necessary to produce intended speech. We localized the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their voice with a time delay as they produced words and sentences (similar to an echo on a conference call), which is well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, dorsal precentral gyrus (dPreCG), a region not previously implicated in auditory feedback processing, exhibited a markedly similar response enhancement, suggesting a tight coupling between the two regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances, due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.

     
2. The stop signal task (SST) is the gold standard experimental model of inhibitory control. However, neither SST condition contrast (stop vs. go, successful vs. failed stop) purely operationalizes inhibition. Because stop trials include a second, infrequent signal, the stop versus go contrast confounds inhibition with attentional and stimulus processing demands. While this confound is controlled for in the successful versus failed stop contrast, the go process is systematically faster on failed stop trials, contaminating the contrast with a different noninhibitory confound. Here, we present an SST variant to address both confounds and evaluate putative neural indices of inhibition with these influences removed. In our variant, stop signals occurred on every trial, equating the noninhibitory demands of the stop versus go contrast. To entice participants to respond despite the impending stop signals, responses produced before stop signals were rewarded. This also reversed the go process bias that typically affects the successful versus failed stop contrast. We recorded scalp electroencephalography in this new version of the task (as well as a standard version of the SST with infrequent stop signals) and found that, even under these conditions, the properties of the frontocentral stop signal P3 ERP remained consistent with the race model. Specifically, in both tasks, the amplitude of the P3 was increased on stop versus go trials. Moreover, the onset of this P3 occurred earlier for successful compared with failed stop trials in both tasks, consistent with the proposal of the race model that an earlier start of the inhibition process will increase stopping success. Therefore, the frontocentral stop signal P3 represents a neural process whose properties are in line with the predictions of the race model of motor inhibition, even when the SST's confounds are controlled.
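The race model's core prediction — that an earlier start of the inhibition process increases stopping success — can be sketched with a toy simulation of the independent race between go and stop processes. All distributions and parameters below are illustrative assumptions, not fitted values from the study.

```python
import random

random.seed(0)

def stop_success_rate(ssd, n_trials=5000):
    """Fraction of stop trials in which the stop process finishes
    before the go process, under the independent race model.
    ssd = stop signal delay (ms); distributions are hypothetical."""
    wins = 0
    for _ in range(n_trials):
        go_finish = random.gauss(450, 80)        # go RT (ms)
        stop_finish = ssd + random.gauss(220, 40)  # SSD + SSRT (ms)
        if stop_finish < go_finish:
            wins += 1
    return wins / n_trials

early = stop_success_rate(ssd=100)  # stop process starts earlier
late = stop_success_rate(ssd=250)   # stop process starts later
assert early > late  # earlier inhibition onset -> more successful stops
```

This mirrors the ERP finding: the earlier the inhibitory process (indexed by P3 onset) begins, the more often it wins the race against the go process.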
  3. Abstract

A longstanding debate has surrounded the role of the motor system in speech perception, but progress in this area has been limited by tasks that only examine isolated syllables and conflate decision-making with perception. Using an adaptive task that temporally isolates perception from decision-making, we examined an EEG signature of motor activity (sensorimotor μ/beta suppression) during the perception of auditory phonemes, auditory words, audiovisual words, and environmental sounds while holding difficulty constant at two levels (Easy/Hard). Results revealed left-lateralized sensorimotor μ/beta suppression that was related to perception of speech but not environmental sounds. Audiovisual word and phoneme stimuli showed enhanced left sensorimotor μ/beta suppression for correct relative to incorrect trials, while auditory word stimuli showed enhanced suppression for incorrect trials. Our results demonstrate that motor involvement in perception is left-lateralized, is specific to speech stimuli, and is not simply the result of domain-general processes. These results provide evidence for an interactive network for speech perception in which dorsal stream motor areas are dynamically engaged during the perception of speech depending on the characteristics of the speech signal. Crucially, this motor engagement has different effects on the perceptual outcome depending on the lexicality and modality of the speech stimulus.

     
  4. Abstract

    During language processing, people make rapid use of contextual information to promote comprehension of upcoming words. When new words are learned implicitly, information contained in the surrounding context can provide constraints on their possible meaning. In the current study, EEG was recorded as participants listened to a series of three sentences, each containing an identical target pseudoword, with the aim of using contextual information in the surrounding language to identify a meaning representation for the novel word. In half of the trials, sentences were semantically coherent so that participants could develop a single representation for the novel word that fit all contexts. Other trials contained unrelated sentence contexts so that meaning associations were not possible. We observed greater theta band enhancement over the left hemisphere across central and posterior electrodes in response to pseudowords processed across semantically related compared to unrelated contexts. Additionally, relative alpha and beta band suppression was increased prior to pseudoword onset in trials where contextual information more readily promoted pseudoword meaning associations. Under the hypothesis that theta enhancement indexes processing demands during lexical access, the current study provides evidence for selective online memory retrieval for novel words learned implicitly in a spoken context.

     
5. Purpose: Stuttering-like disfluencies (SLDs) and typical disfluencies (TDs) are both more likely to occur as utterance length increases. However, longer and shorter utterances differ by more than the number of morphemes: They may also serve different communicative functions or describe different ideas. Decontextualized language, or language that describes events and concepts outside of the “here and now,” is associated with longer utterances. Prior work has shown that language samples taken in decontextualized contexts contain more disfluencies, but averaging across an entire language sample creates a confound between utterance length and decontextualization as contributors to stuttering. We coded individual utterances from naturalistic play samples to test the hypothesis that decontextualized language leads to increased disfluencies above and beyond the effects of utterance length. Method: We used archival transcripts of language samples from 15 preschool children who stutter (CWS) and 15 age- and sex-matched children who do not stutter (CWNS). Utterances were coded as either contextualized or decontextualized, and we used mixed-effects logistic regression to investigate the impact of utterance length and decontextualization on SLDs and TDs. Results: CWS were more likely to stutter when producing decontextualized utterances, even when controlling for utterance length. An interaction between decontextualization and utterance length indicated that the effect of decontextualization was greatest for shorter utterances. TDs increased in decontextualized utterances when controlling for utterance length for both CWS and CWNS. The effect of decontextualization on TDs did not differ statistically between the two groups. Conclusions: The increased working memory demands associated with decontextualized language contribute to increased language planning effort. This leads to increased TDs in CWS and CWNS. Under a multifactorial dynamic model of stuttering, the increased language demands may also contribute to increased stuttering in CWS due to instabilities in their speech motor systems.
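The logic of the analysis — disfluency odds modeled as a function of utterance length and a decontextualization flag — can be sketched on simulated data. This is a fixed-effects simplification of the study's mixed-effects logistic regression (no per-child random effects), and all coefficients and sample sizes are illustrative assumptions.

```python
import math
import random

random.seed(2)

# Simulated utterances: length (in morphemes) and a decontextualized flag.
# Assumption: both predictors raise the log-odds of a disfluency,
# echoing the reported effects.
rows = []
for _ in range(1000):
    length = random.randint(1, 12)
    decon = 1.0 if random.random() < 0.4 else 0.0
    true_logit = -3.0 + 0.2 * length + 0.8 * decon
    p = 1.0 / (1.0 + math.exp(-true_logit))
    rows.append((length, decon, 1 if random.random() < p else 0))

# Fit log-odds(disfluency) ~ b0 + b1*length + b2*decontextualized
# by gradient descent (stand-in for the mixed-effects model).
b0 = b1 = b2 = 0.0
lr = 0.05
n = len(rows)
for _ in range(1500):
    g0 = g1 = g2 = 0.0
    for x1, x2, y in rows:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x1 + b2 * x2)))
        g0 += p - y
        g1 += (p - y) * x1
        g2 += (p - y) * x2
    b0 -= lr * g0 / n
    b1 -= lr * g1 / n
    b2 -= lr * g2 / n

# Both coefficients recovered as positive: length and decontextualization
# each independently increase the odds of a disfluency.
assert b1 > 0 and b2 > 0
```

Because decontextualization keeps a positive coefficient while length is in the model, the sketch shows how the design separates the two predictors that sample-level averaging would confound.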