Title: Bihemispheric network dynamics coordinating vocal feedback control
Abstract

Modulation of vocal pitch is a key speech feature that conveys important linguistic and affective information. Auditory feedback is used to monitor and maintain pitch. We examined induced neural high gamma power (HGP) (65–150 Hz) using magnetoencephalography during pitch feedback control. Participants phonated into a microphone while hearing their auditory feedback through headphones. During each phonation, a single real‐time 400 ms pitch shift was applied to the auditory feedback. Participants compensated by rapidly changing their pitch to oppose the pitch shifts. This behavioral change required coordination of the neural speech motor control network, including integration of auditory and somatosensory feedback to initiate changes in motor plans. We found increases in HGP across both hemispheres within 200 ms of pitch shifts, covering left sensory and right premotor, parietal, temporal, and frontal regions, involved in sensory detection and processing of the pitch shift. Later responses to pitch shifts (200–300 ms) were right dominant, in parietal, frontal, and temporal regions. The timing of activity in these regions indicates their role in coordinating motor change and in detecting and processing the sensory consequences of this change. Subtracting out cortical responses during passive listening to recordings of the phonations isolated HGP increases specific to speech production, highlighting the involvement of right parietal and premotor cortex and left posterior temporal cortex in the motor response. Correlation of HGP with behavioral compensation demonstrated right frontal region involvement in modulating participants' compensatory responses. This study highlights the bihemispheric sensorimotor cortical network involvement in auditory feedback‐based control of vocal pitch. Hum Brain Mapp 37:1474‐1485, 2016. © 2016 Wiley Periodicals, Inc.
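Pitch perturbations in paradigms like this one are typically specified in cents (hundredths of a semitone; 1200 cents make one octave). The following minimal sketch shows only that cents-to-frequency-ratio arithmetic and a naive resampling shift; the function names and the resampling scheme are illustrative assumptions, not the authors' real-time DSP, which must preserve timing.

```python
import math

def cents_to_ratio(cents):
    """Convert a pitch shift in cents to a frequency ratio.
    100 cents = one semitone; 1200 cents = one octave."""
    return 2.0 ** (cents / 1200.0)

def shift_pitch(samples, cents):
    """Crude offline pitch shift by linear-interpolation resampling.
    Note: this also changes duration; real-time feedback perturbation
    uses phase-vocoder-style processing that preserves timing."""
    ratio = cents_to_ratio(cents)
    n_out = int(len(samples) / ratio)
    out = []
    for i in range(n_out):
        pos = i * ratio          # fractional read position in the input
        j = int(pos)
        frac = pos - j
        nxt = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(samples[j] * (1.0 - frac) + nxt * frac)
    return out
```

A +100 cent shift, for example, raises frequency by a factor of about 1.0595 (one semitone), which is the magnitude used in the related study below.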

 
NSF-PAR ID:
10238212
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
Human Brain Mapping
Volume:
37
Issue:
4
ISSN:
1065-9471
Page Range / eLocation ID:
p. 1474-1485
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Accurate integration of sensory inputs and motor commands is essential to achieve successful behavioral goals. A robust model of sensorimotor integration is the pitch perturbation response, in which speakers respond rapidly to shifts of the pitch in their auditory feedback. In a previous study, we demonstrated abnormal sensorimotor integration, with an abnormally enhanced behavioral response to pitch perturbation, in patients with Alzheimer’s disease (AD). Here we examine the neural correlates of the abnormal pitch perturbation response in AD patients using magnetoencephalographic imaging. The participants phonated the vowel /α/ while a real-time signal processor briefly perturbed the pitch (100 cents, 400 ms) of their auditory feedback. We examined the high-gamma band (65–150 Hz) responses during this task. AD patients showed significantly reduced left prefrontal activity during the early phase of perturbation and increased right middle temporal activity during the later phase of perturbation, compared to controls. Activity in these brain regions significantly correlated with the behavioral response. These results demonstrate that impaired prefrontal modulation of the speech motor control network and additional recruitment of right temporal regions are significant mediators of aberrant sensorimotor integration in patients with AD. The abnormal neural integration mechanisms signify the contribution of cortical network dysfunction to cognitive and behavioral deficits in AD.
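Both this study and the main article quantify responses as power in the 65–150 Hz high-gamma band. As a minimal, dependency-free illustration of what "band power" means (a naive DFT sum over in-band bins; the function name and approach are illustrative assumptions, not the MEG source-imaging toolchain these studies actually used):

```python
import cmath, math

def band_power(signal, fs, f_lo=65.0, f_hi=150.0):
    """Sum of squared DFT magnitudes for bins whose frequency falls in
    [f_lo, f_hi] Hz. Naive O(n^2) DFT keeps the sketch dependency-free;
    real analyses use FFTs and Hilbert envelopes for time-resolved power."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n                      # frequency of bin k in Hz
        if f_lo <= f <= f_hi:
            X = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            power += abs(X) ** 2
    return power
```

A pure 100 Hz tone contributes heavily to this band, while a 30 Hz tone contributes essentially nothing, which is the sense in which high-gamma power isolates fast cortical activity.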

     
  2. Neurobiological models of speech perception posit that both left and right posterior temporal brain regions are involved in the early auditory analysis of speech sounds. However, frank deficits in speech perception are not readily observed in individuals with right hemisphere damage. Instead, damage to the right hemisphere is often associated with impairments in vocal identity processing. Herein lies an apparent paradox: The mapping between acoustics and speech sound categories can vary substantially across talkers, so why might right hemisphere damage selectively impair vocal identity processing without obvious effects on speech perception? In this review, I attempt to clarify the role of the right hemisphere in speech perception through a careful consideration of its role in processing vocal identity. I review evidence showing that right posterior superior temporal, right anterior superior temporal, and right inferior/middle frontal regions all play distinct roles in vocal identity processing. In considering the implications of these findings for neurobiological accounts of speech perception, I argue that the recruitment of right posterior superior temporal cortex during speech perception may specifically reflect the process of conditioning phonetic identity on talker information. I suggest that the relative lack of involvement of other right hemisphere regions in speech perception may be because speech perception does not necessarily place a high burden on talker processing systems, and I argue that the extant literature hints at potential subclinical impairments in the speech perception abilities of individuals with right hemisphere damage.
  3. Aging is associated with an exaggerated representation of the speech envelope in auditory cortex. The relationship between this age-related exaggerated response and a listener’s ability to understand speech in noise remains an open question. Here, information-theory-based analysis methods are applied to magnetoencephalography recordings of human listeners, investigating their cortical responses to continuous speech, using the novel nonlinear measure of phase-locked mutual information between the speech stimuli and cortical responses. The cortex of older listeners shows an exaggerated level of mutual information, compared with younger listeners, for both attended and unattended speakers. The mutual information peaks at several distinct latencies: early (∼50 ms), middle (∼100 ms), and late (∼200 ms). For the late component, the neural enhancement of attended over unattended speech is affected by stimulus signal-to-noise ratio, but the direction of this dependency is reversed by aging. Critically, in older listeners and for the same late component, greater cortical exaggeration is correlated with decreased behavioral inhibitory control. This negative correlation also carries over to speech intelligibility in noise, where greater cortical exaggeration in older listeners is correlated with worse speech intelligibility scores. Finally, an age-related lateralization difference is also seen for the ∼100 ms latency peaks, where older listeners show a bilateral response compared with younger listeners’ right lateralization. Thus, this information-theory-based analysis provides new, and less coarse-grained, results regarding age-related change in auditory cortical speech processing, and its correlation with cognitive measures, compared with related linear measures.

    NEW & NOTEWORTHY Cortical representations of natural speech are investigated using a novel nonlinear approach based on mutual information. Cortical responses, phase-locked to the speech envelope, show an exaggerated level of mutual information associated with aging, appearing at several distinct latencies (∼50, ∼100, and ∼200 ms). Critically, for older listeners only, the ∼200 ms latency response components are correlated with specific behavioral measures, including behavioral inhibition and speech comprehension.
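In its simplest form, mutual information between a stimulus and a response can be estimated by binning both signals and comparing joint to marginal bin frequencies. The toy histogram estimator below (an illustrative assumption; the phase-locked measure described above is considerably more elaborate and operates on band-limited neural phase) shows the basic computation:

```python
import math
from collections import Counter

def mutual_information(x, y, bins=8):
    """Toy histogram estimate of mutual information I(X;Y) in bits
    between two equal-length sequences."""
    def digitize(values):
        lo = min(values)
        width = (max(values) - lo) / bins or 1.0   # guard constant input
        return [min(int((v - lo) / width), bins - 1) for v in values]
    xb, yb = digitize(x), digitize(y)
    n = len(xb)
    px, py = Counter(xb), Counter(yb)
    pxy = Counter(zip(xb, yb))
    mi = 0.0
    for (a, b), count in pxy.items():
        joint = count / n
        # I(X;Y) = sum p(a,b) * log2( p(a,b) / (p(a) p(b)) )
        mi += joint * math.log2(joint * n * n / (px[a] * py[b]))
    return mi
```

For identical inputs the estimate reduces to the entropy of the binned signal, and for an informative signal paired with a constant one it is zero; histogram estimators like this are also positively biased for finite data, which real analyses must correct for.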
  4. The extent to which articulatory information embedded in incoming speech contributes to the formation of new perceptual categories for speech sounds has been debated for decades. It has been theorized that the acquisition of new speech sound categories requires a network of sensory and speech motor cortical areas (the “dorsal stream”) to successfully integrate auditory and articulatory information. However, it is possible that these brain regions are not sensitive specifically to articulatory information, but instead are sensitive to the abstract phonological categories being learned. We tested this hypothesis by training participants over the course of several days on an articulable non-native speech contrast and acoustically matched inarticulable nonspeech analogues. After participants reached comparable levels of proficiency with the two sets of stimuli, activation was measured with fMRI as they passively listened to both sound types. Decoding of category membership for the articulable speech contrast alone revealed a series of left and right hemisphere regions outside of the dorsal stream that have previously been implicated in the emergence of non-native speech sound categories, while no regions could successfully decode the inarticulable nonspeech contrast. Although activation patterns in the left inferior frontal gyrus (IFG), the middle temporal gyrus (MTG), and the supplementary motor area (SMA) provided better information for decoding the articulable (speech) sounds than the inarticulable (sine wave) sounds, the finding that dorsal stream regions do not emerge as good decoders of the articulable contrast suggests that other factors, including the strength and structure of the emerging speech categories, are more likely drivers of dorsal stream activation in novel sound learning.
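Category decoding of the kind described above asks whether held-out activation patterns can be classified from their similarity to patterns seen during training. A minimal nearest-centroid sketch conveys the idea (an illustrative stand-in; the function name and toy data are assumptions, and actual MVPA analyses use cross-validated machine-learning classifiers over voxel patterns):

```python
import math

def nearest_centroid_decode(train, test):
    """Classify each held-out pattern by the closest class-mean pattern
    (Euclidean distance) and return decoding accuracy. `train` and `test`
    map class labels to lists of equal-length feature vectors."""
    centroids = {
        label: [sum(col) / len(vecs) for col in zip(*vecs)]
        for label, vecs in train.items()
    }
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    correct = total = 0
    for label, vecs in test.items():
        for v in vecs:
            pred = min(centroids, key=lambda c: dist(centroids[c], v))
            correct += pred == label
            total += 1
    return correct / total
```

Above-chance accuracy from a region's patterns is what licenses the claim that the region carries category information; chance-level accuracy, as for the nonspeech contrast, is a null result.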
  5. The N100, the negative peak of the electrical response occurring around 100 ms, is present in diverse functional paradigms, including auditory, visual, somatic, behavioral, and cognitive tasks. We hypothesized that the presence of the N100 across different paradigms may be indicative of a more general property of the cerebral cortex, regardless of functional or anatomic specificity. To test this hypothesis, we combined transcranial magnetic stimulation (TMS) and electroencephalography (EEG) to measure cortical excitability across cortical regions without relying on specific sensory, cognitive, or behavioral modalities. The five stimulated regions included the left prefrontal, left motor, and left primary auditory cortices, the vertex, and the posterior cerebellum, with stimulation performed at supra- and subthreshold intensities. EEG responses produced by TMS at all five locations generated N100s that peaked at the vertex. The amplitudes of the N100s elicited from these five diverse cortical origins were not statistically significantly different (all uncorrected p > 0.05). No other EEG response component was found to have this global property of the N100. Our findings suggest that anatomy- and modality-specific interpretations of the N100 should be carefully evaluated, and that the TMS-evoked N100 may be used as a biomarker for evaluating local versus general cortical properties across the brain.