
Search for: All records

Award ID contains: 1734892

Note: Clicking a Digital Object Identifier (DOI) number takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Voice pitch carries linguistic as well as non-linguistic information. Previous studies have described cortical tracking of voice pitch in clean speech, with responses reflecting both pitch strength and pitch value. However, pitch is also a powerful cue for auditory stream segregation, especially when competing streams differ in fundamental frequency, as is the case when multiple speakers talk simultaneously. We therefore investigated how cortical speech pitch tracking is affected by the presence of a second, task-irrelevant speaker. We analyzed human magnetoencephalography (MEG) responses to continuous narrative speech, presented either as a single talker in a quiet background or as a two-talker mixture of a male and a female speaker. In clean speech, voice pitch was associated with a right-dominant response, peaking at a latency of around 100 ms, consistent with previous EEG and ECoG results. The response tracked both the presence of pitch and the relative value of the speaker’s fundamental frequency. In the two-talker mixture, pitch of the attended speaker was tracked bilaterally, regardless of whether pitch was simultaneously present in the irrelevant speaker’s speech. Pitch tracking for the irrelevant speaker was reduced: only the right hemisphere still significantly tracked pitch of the unattended speaker, and only during intervals in which no pitch was present in the attended talker’s speech. Taken together, these results suggest that pitch-based segregation of multiple speakers, at least as measured by macroscopic cortical tracking, is not entirely automatic but strongly dependent on selective attention.
    Free, publicly-accessible full text available July 8, 2023
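The pitch tracking above hinges on the speaker's fundamental frequency (f0). As a minimal, hedged illustration of what that quantity is, here is a toy autocorrelation-based f0 estimator applied to a synthetic tone; the study's MEG analysis is far more involved, and `estimate_f0` is a hypothetical helper, not code from the paper:

```python
import numpy as np

def estimate_f0(signal, fs, fmin=50.0, fmax=400.0):
    """Estimate fundamental frequency by picking the autocorrelation peak
    within the plausible pitch-period range [fs/fmax, fs/fmin]."""
    sig = signal - signal.mean()
    # Autocorrelation at non-negative lags 0..N-1.
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi + 1])
    return fs / lag

fs = 16000
t = np.arange(int(0.05 * fs)) / fs
tone = np.sin(2 * np.pi * 120.0 * t)  # synthetic 120 Hz "voice pitch"
f0 = estimate_f0(tone, fs)            # close to 120 Hz
```

Real voiced speech has a harmonic-rich, time-varying f0, so production pitch trackers add voicing detection and smoothing on top of this basic idea.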
  2. Objective: The Temporal Response Function (TRF) is a linear model of neural activity time-locked to continuous stimuli, including continuous speech. TRFs based on speech envelopes typically have distinct components that have provided remarkable insights into the cortical processing of speech. However, current methods may yield unreliable estimates of single-subject TRF components. Here, we compare two established methods of TRF component estimation and propose novel algorithms that use prior knowledge of these components to bypass full TRF estimation. Methods: We compared two established algorithms, ridge and boosting, and two novel algorithms based on Subspace Pursuit (SP) and Expectation Maximization (EM), which directly estimate TRF components given plausible assumptions regarding component characteristics. Single-channel, multi-channel, and source-localized TRFs were fit on simulations and real magnetoencephalographic data. Performance metrics included model fit and component estimation accuracy. Results: Boosting and ridge have comparable performance in component estimation. The novel algorithms outperformed the others in simulations, but not on real data, possibly because the assumed component characteristics were not actually met. Ridge had slightly better model fits on real data than boosting, but also more spurious TRF activity. Conclusion: Results indicate that both smooth (ridge) and sparse (boosting) algorithms perform comparably at TRF component estimation. The SP and EM algorithms may be accurate, but rely on assumptions about component characteristics. Significance: This systematic comparison establishes the suitability of widely used and novel algorithms for estimating robust TRF components, which is essential for improved subject-specific investigations into the cortical processing of speech.
    Free, publicly-accessible full text available June 21, 2023
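The ridge estimator compared above can be sketched in a few lines: a lagged design matrix of the stimulus is regressed onto the response with an L2 penalty, and the fitted weights across lags form the TRF. This is a minimal single-channel illustration under simplifying assumptions (white-noise stimulus, a fixed regularization parameter `lam`), not the paper's implementation:

```python
import numpy as np

def lagged_design(stimulus, n_lags):
    """Design matrix whose k-th column is the stimulus delayed by k samples."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]
    return X

def ridge_trf(stimulus, response, n_lags, lam=1.0):
    """Estimate a TRF (stimulus-to-response filter) by ridge regression:
    w = (X'X + lam*I)^-1 X'y."""
    X = lagged_design(stimulus, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ response)

# Simulated data: response = stimulus convolved with a known TRF, plus noise.
rng = np.random.default_rng(0)
stim = rng.standard_normal(5000)
true_trf = np.array([0.0, 0.5, 1.0, 0.5, 0.0, -0.3, 0.0, 0.0])
resp = np.convolve(stim, true_trf)[:5000] + 0.1 * rng.standard_normal(5000)
est = ridge_trf(stim, resp, n_lags=8, lam=1.0)  # recovers true_trf closely
```

Boosting, by contrast, builds the TRF incrementally from sparse impulse updates; the abstract's point is that both routes land on comparable component estimates.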
  3. Stroke patients with hemiparesis display decreased beta band (13–25 Hz) rolandic activity, correlating with impaired motor function. However, clinically, patients without significant weakness, with small lesions far from sensorimotor cortex, exhibit bilateral decreased motor dexterity and slowed reaction times. We investigate whether these minor stroke patients also display abnormal beta band activity. Magnetoencephalographic (MEG) data were collected from nine minor stroke patients (NIHSS < 4) without significant hemiparesis, at ~1 and ~6 months postinfarct, and eight age-similar controls. Rolandic relative beta power during matching tasks and resting state, and Beta Event Related (De)Synchronization (ERD/ERS) during button press responses were analyzed. Regardless of lesion location, patients had significantly reduced relative beta power and ERS compared to controls. Abnormalities persisted over visits, and were present in both ipsi- and contra-lesional hemispheres, consistent with bilateral impairments in motor dexterity and speed. Minor stroke patients without severe weakness display reduced rolandic beta band activity in both hemispheres, which may be linked to bilaterally impaired dexterity and processing speed, implicating global connectivity dysfunction affecting sensorimotor cortex independent of lesion location. Findings not only illustrate global network disruption after minor stroke, but suggest rolandic beta band activity may be a potential biomarker and treatment target, even for minor stroke patients with small lesions far from sensorimotor areas.
    Free, publicly-accessible full text available March 28, 2023
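Relative beta power, the measure analyzed above, is the fraction of total spectral power falling in the 13–25 Hz band. A minimal FFT-based sketch on a synthetic signal (the actual pipeline operates on MEG sensor data with more careful spectral estimation; `relative_beta_power` is a hypothetical helper, not from the study):

```python
import numpy as np

def relative_beta_power(x, fs, band=(13.0, 25.0)):
    """Fraction of total (non-DC) spectral power inside the beta band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
    total = psd[freqs > 0].sum()
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].sum() / total

# 4 s of signal: an 18 Hz (beta) sine plus a weaker 9 Hz (alpha) sine.
fs = 256
t = np.arange(fs * 4) / fs
x = np.sin(2 * np.pi * 18 * t) + 0.5 * np.sin(2 * np.pi * 9 * t)
rb = relative_beta_power(x, fs)  # ~0.8: beta power 0.5 of total 0.625
```

ERD/ERS then expresses how this band power drops (desynchronization) or rebounds (synchronization) around a movement, relative to a pre-movement baseline.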
  4. Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence-level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the different predictive models in nonidentical parts of the superior temporal lobes bilaterally, with the right hemisphere showing a relative preference for more local models. These results suggest that speech processing recruits both local and unified predictive models in parallel, reconciling previous disparate findings. Parallel models might make the perceptual system more robust, facilitate processing of unexpected inputs, and serve a function in language acquisition.
    Free, publicly-accessible full text available January 21, 2023
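Context models of the kind compared above are commonly quantified by phoneme surprisal: the negative log probability of each phoneme given its context, which is then regressed against the neural response. A toy sketch for a "sublexical" local model; the bigram probabilities and phoneme labels here are made up purely for illustration:

```python
import numpy as np

# Toy phoneme-bigram model: P(next phoneme | previous phoneme).
# These probabilities are illustrative, not estimated from any corpus.
bigram = {
    ("k", "ae"): 0.6, ("k", "ih"): 0.4,
    ("ae", "t"): 0.7, ("ae", "n"): 0.3,
}

def surprisal(p):
    """Surprisal in bits: -log2 P. High surprisal = unexpected phoneme."""
    return -np.log2(p)

# Phoneme sequence for "cat": surprisal of each phoneme given the previous one.
seq = ["k", "ae", "t"]
s = [surprisal(bigram[(prev, nxt)]) for prev, nxt in zip(seq, seq[1:])]
```

A unified model would instead condition each phoneme's probability on the whole sentence so far (e.g. via a lexical or language model), yielding a different surprisal time course; the paper's finding is that both time courses predict distinct parts of the MEG response.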
  5. Identifying the directed connectivity that underlies networked activity between different cortical areas is critical for understanding the neural mechanisms behind sensory processing. Granger causality (GC) is widely used for this purpose in functional magnetic resonance imaging analysis, but its temporal resolution is low, making it difficult to capture the millisecond-scale interactions underlying sensory processing. Magnetoencephalography (MEG) has millisecond resolution, but only provides low-dimensional sensor-level linear mixtures of neural sources, which makes GC inference challenging. Conventional methods proceed in two stages: First, cortical sources are estimated from MEG using a source localization technique, followed by GC inference among the estimated sources. However, the spatiotemporal biases in estimating sources propagate into the subsequent GC analysis stage and may result in both false alarms and missed true GC links. Here, we introduce the Network Localized Granger Causality (NLGC) inference paradigm, which models the source dynamics as latent sparse multivariate autoregressive processes, estimates their parameters directly from the MEG measurements, integrated with source localization, and employs the resulting parameter estimates to produce a precise statistical characterization of the detected GC links. We offer several theoretical and algorithmic innovations within NLGC and further examine its utility via comprehensive simulations and application to MEG data from an auditory task involving tone processing from both younger and older participants. Our simulation studies reveal that NLGC is markedly robust with respect to model mismatch, network size, and low signal-to-noise ratio, whereas the conventional two-stage methods result in high false alarms and mis-detections. We also demonstrate the advantages of NLGC in revealing the cortical network-level characterization of neural activity during tone processing and resting state by delineating task- and age-related connectivity changes.
    Free, publicly-accessible full text available January 1, 2023
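At its core, GC asks whether the past of one signal improves prediction of another beyond that signal's own past. A minimal bivariate sketch with least-squares AR fits on two observed time series; NLGC itself instead fits latent sparse multivariate AR sources directly from MEG sensor mixtures, so this is only the underlying idea, not the paper's method:

```python
import numpy as np

def ar_residual_var(target, predictors, order):
    """Residual variance of a least-squares AR fit of `target`
    on `order` lags of each series in `predictors`."""
    rows = []
    for t in range(order, len(target)):
        row = []
        for p in predictors:
            row.extend(p[t - order:t][::-1])  # most recent lag first
        rows.append(row)
    X, y = np.asarray(rows), target[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ coef)

def granger_causality(x, y, order=2):
    """log(var_reduced / var_full); > 0 suggests x Granger-causes y."""
    full = ar_residual_var(y, [y, x], order)      # y's past AND x's past
    reduced = ar_residual_var(y, [y], order)      # y's past only
    return np.log(reduced / full)

# Simulate a known x -> y link: y is driven by lagged x.
rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
gc_xy = granger_causality(x, y)  # clearly positive (true link)
gc_yx = granger_causality(y, x)  # near zero (no reverse link)
```

The two-stage pitfall the abstract describes corresponds to running this kind of test not on the true sources but on localization estimates, whose shared spatial leakage can masquerade as directed links.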
  6. Abstract
    Stroke patients with hemiparesis display decreased beta band (13–25Hz) rolandic activity, correlating to impaired motor function. However, clinically, patients without significant weakness, with small lesions far from sensorimotor cortex, exhibit bilateral decreased motor dexterity and slowed reaction times. We investigate whether these minor stroke patients also display abnormal beta band activity. Magnetoencephalographic (MEG) data were collected from nine minor stroke patients (NIHSS &lt; 4) without significant hemiparesis, at ~1 and ~6 months postinfarct, and eight age-similar controls. Rolandic relative beta power during matching tasks and resting state, and Beta Event Related (De)Synchronization (ERD/ERS) during button press responses were analyzed. Regardless of lesion location, patients had significantly reduced relative beta power and ERS compared to controls. abnormalities persisted over visits, and were present in both ipsi- and contra-lesional hemispheres, consistent with bilateral impairments in motor dexterity and speed. Minor stroke patients without severe weakness display reduced rolandic beta band activity in both hemispheres, which may be linked to bilaterally impaired dexterity and processing speed, implicating global connectivity dysfunction affecting sensorimotor cortex independent of lesion location. Findings not only illustrate global network disruption after minor stroke, but suggest rolandic beta band activity may be a potential biomarker and treatment target, evenMore>>
  7. Abstract
    Speech processing is highly incremental. It is widely accepted that human listeners continuously use the linguistic context to anticipate upcoming concepts, words, and phonemes. However, previous evidence supports two seemingly contradictory models of how a predictive context is integrated with the bottom-up sensory input: Classic psycholinguistic paradigms suggest a two-stage process, in which acoustic input initially leads to local, context-independent representations, which are then quickly integrated with contextual constraints. This contrasts with the view that the brain constructs a single coherent, unified interpretation of the input, which fully integrates available information across representational hierarchies, and thus uses contextual constraints to modulate even the earliest sensory representations. To distinguish these hypotheses, we tested magnetoencephalography responses to continuous narrative speech for signatures of local and unified predictive models. Results provide evidence that listeners employ both types of models in parallel. Two local context models uniquely predict some part of early neural responses, one based on sublexical phoneme sequences, and one based on the phonemes in the current word alone; at the same time, even early responses to phonemes also reflect a unified model that incorporates sentence-level constraints to predict upcoming phonemes. Neural source localization places the anatomical origins of the differentMore>>