Entrainment, the unconscious process leading to coordination between communication partners, is an important dynamic human behavior that helps us connect with one another. Difficulty developing and sustaining social connections is a hallmark of autism spectrum disorder (ASD). Subtle differences in social behaviors have also been noted in first-degree relatives of autistic individuals and may express underlying genetic liability to ASD. An in-depth examination of verbal entrainment was conducted to assess disruptions to entrainment as a contributing factor to the language phenotype in ASD. Results revealed distinct patterns of prosodic and lexical entrainment in individuals with ASD. Notably, subtler differences in prosodic and syntactic entrainment were identified in parents of autistic individuals. Findings point towards entrainment, particularly prosodic entrainment, as a key process linked to social communication difficulties in ASD and reflective of genetic liability to ASD.
Differences in prosody (e.g., intonation, rhythm) are among the most obvious language-related impairments in autism spectrum disorder (ASD), and significantly impact communication. Subtle prosodic differences have also been identified in a subset of clinically unaffected first-degree relatives of individuals with ASD, and may reflect genetic liability to ASD. This study investigated the neural basis of prosodic differences in ASD and first-degree relatives through analysis of the feedforward and feedback control involved in the planning, production, self-monitoring, and self-correction of speech, using a pitch-perturbed auditory feedback paradigm during sustained vowel and speech production. Results revealed larger vocal response magnitudes to pitch-perturbed auditory feedback across tasks in the ASD and ASD parent groups, with differences in sustained vowel production driven by parents who displayed subclinical personality and language features associated with ASD (i.e., the broad autism phenotype). Both the ASD and ASD parent groups exhibited increased response onset latencies during sustained vowel production, while the ASD parent group exhibited decreased response onset latencies during speech production. Vocal response magnitudes across tasks were associated with prosodic atypicalities in both individuals with ASD and their parents. Exploratory event-related potential (ERP) analyses in a subgroup of participants during the sustained vowel task revealed reduced P1 ERP amplitudes in the ASD group, with similar trends observed in parents. Overall, results suggest that underdeveloped feedforward systems and neural attenuation in detecting audio-vocal feedback may contribute to ASD-related prosodic atypicalities. Importantly, results implicate atypical audio-vocal integration as a marker of genetic risk for ASD, evident both in individuals with ASD and among their clinically unaffected relatives.
Previous research has identified atypicalities in prosody (e.g., intonation) in individuals with ASD and a subset of their first-degree relatives. To better understand the mechanisms underlying prosodic differences in ASD, this study examined how individuals with ASD and their parents adjusted control of their voice in response to unexpected changes in what they heard themselves say (i.e., audio-vocal integration). Results suggest that disruptions to audio-vocal integration in individuals with ASD contribute to ASD-related prosodic atypicalities, and that the more subtle differences observed in parents could reflect underlying genetic liability to ASD.
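The two pitch-perturbation measures reported above, vocal response magnitude and response onset latency, can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' analysis pipeline: the function name, the baseline-window length, and the peak-based latency definition are assumptions for the sketch.

```python
import numpy as np

# Hypothetical sketch: quantify the compensatory vocal response to a
# pitch-perturbed auditory feedback trial. f0_trace is the speaker's
# fundamental-frequency contour (Hz) sampled at `fs` Hz; the perturbation
# onset time and the pre-perturbation baseline window are assumptions.

def response_magnitude_cents(f0_trace, fs, pert_onset_s, baseline_s=0.2):
    """Peak post-perturbation f0 change (cents) and its latency (s)."""
    onset = int(pert_onset_s * fs)
    base = np.mean(f0_trace[max(0, onset - int(baseline_s * fs)):onset])
    # Express the post-perturbation contour in cents relative to baseline.
    cents = 1200 * np.log2(f0_trace[onset:] / base)
    peak_idx = np.argmax(np.abs(cents))
    return cents[peak_idx], peak_idx / fs
```

A larger returned magnitude corresponds to the stronger compensatory responses reported for the ASD and ASD parent groups; the latency term corresponds to response onset timing, simplified here to time-to-peak.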
- NSF-PAR ID: 10460203
- Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
- Date Published:
- Journal Name: Autism Research
- Volume: 12
- Issue: 8
- ISSN: 1939-3792
- Page Range / eLocation ID: p. 1192-1210
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract: Temporal proximity is an important cue for multisensory integration. Previous evidence indicates that individuals with autism and schizophrenia are more likely to integrate multisensory inputs over a longer temporal binding window (TBW). However, whether such deficits in audiovisual temporal integration extend to subclinical populations with high schizotypal and autistic traits is unclear. Using audiovisual simultaneity judgment (SJ) tasks for nonspeech and speech stimuli, our results suggested that the width of the audiovisual TBW was not significantly correlated with self-reported schizotypal and autistic traits in a group of young adults. Functional magnetic resonance imaging (fMRI) resting-state activity was also acquired to explore the neural correlates underlying inter-individual variability in TBW width. Across the entire sample, stronger resting-state functional connectivity (rsFC) between the left superior temporal cortex and the left precuneus, and weaker rsFC between the left cerebellum and the right dorsolateral prefrontal cortex, were correlated with a narrower TBW for speech stimuli. Meanwhile, stronger rsFC between the left anterior superior temporal gyrus and the right inferior temporal gyrus was correlated with a wider audiovisual TBW for non-speech stimuli. The TBW-related rsFC was not affected by levels of subclinical traits. In conclusion, this study indicates that audiovisual temporal processing may not be affected by autistic and schizotypal traits, and that rsFC between brain regions responsive to multisensory information and timing may account for inter-individual differences in TBW width.
Lay summary: Individuals with ASD and schizophrenia are more likely to perceive asynchronous auditory and visual events as occurring simultaneously even if they are well separated in time. We investigated whether similar difficulties in audiovisual temporal processing were present in subclinical populations with high autistic and schizotypal traits. We found that the ability to detect audiovisual asynchrony was not affected by different levels of autistic and schizotypal traits. We also found that connectivity of some brain regions engaging in multisensory and timing tasks might explain an individual's tendency to bind multisensory information within a wide or narrow time window.
© 2020 International Society for Autism Research and Wiley Periodicals LLC. Autism Res 2021, 14: 668–680.
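The temporal binding window discussed above can be made concrete with a small sketch: estimate TBW width from simultaneity-judgment data as the span of stimulus-onset asynchronies (SOAs) over which "simultaneous" responses exceed 50%, interpolating linearly at the threshold crossings. This is a simplified stand-in for the psychometric-function fits typically used in SJ studies; the 50% threshold and function name are assumptions.

```python
import numpy as np

# Illustrative TBW estimate (not the authors' analysis): width of the SOA
# range where the proportion of "simultaneous" responses stays above a
# threshold, with linear interpolation at the left and right crossings.

def tbw_width(soas_ms, p_simultaneous, threshold=0.5):
    above = p_simultaneous >= threshold
    if not above.any():
        return 0.0
    idx = np.where(above)[0]
    left, right = idx[0], idx[-1]
    lo = soas_ms[left]
    if left > 0:  # interpolate on the rising flank
        lo = np.interp(threshold,
                       [p_simultaneous[left - 1], p_simultaneous[left]],
                       [soas_ms[left - 1], soas_ms[left]])
    hi = soas_ms[right]
    if right < len(soas_ms) - 1:  # interpolate on the falling flank
        hi = np.interp(threshold,
                       [p_simultaneous[right + 1], p_simultaneous[right]],
                       [soas_ms[right + 1], soas_ms[right]])
    return hi - lo
```

On this definition, a wider return value corresponds to the longer binding windows reported for clinical groups, and a narrower one to finer audiovisual asynchrony detection.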
The common display of atypical behavioral responses to sounds by individuals with autism (ASD) suggests that they process sounds differently. Within ASD, individuals who are minimally or low verbal (ASD‐MLV) are suspected to have greater auditory processing impairments. However, it is unknown whether atypical auditory behaviors are related to receptive language and/or neural processing of sounds in ASD‐MLV. In Experiment 1, we compared the percentage of time 47 ASD‐MLV and 36 verbally fluent (ASD‐V) participants, aged 5–21, displayed atypical auditory or visual sensory behaviors during the administration of the Autism Diagnostic Observation Schedule (ADOS). In Experiment 2, we tested whether atypical auditory behaviors were more frequent in ASD‐MLV participants with receptive language deficits. In Experiment 3, we tested whether atypical auditory behaviors correlated with neural indices of sensitivity to perceptual sound differences as measured by the amplitude of neural responses to nonspeech intensity deviants. We found that ASD‐MLV participants engaged in atypical auditory behaviors more often than ASD‐V participants; in contrast, the incidence of atypical visual behaviors did not differ between the groups. Lower receptive language skills in the ASD‐MLV group were predicted by greater incidence of atypical auditory behaviors. Exploratory analyses revealed a significant negative correlation between the amount of atypical auditory behaviors and the amplitude of neural response to deviants. Future work is needed to elucidate whether the relationship between atypical auditory behaviors and receptive language impairments in ASD‐MLV individuals results from disruptions in the brain mechanisms involved in auditory processing.
Lay Summary: Minimally and low verbal children and adolescents with autism (ASD‐MLV) displayed more atypical auditory behaviors (e.g., ear covering and humming) than verbally fluent participants with ASD. In ASD‐MLV participants, time spent exhibiting such behaviors was associated with receptive vocabulary deficits and weaker neural responses to changes in sound loudness. Findings suggest that individuals with ASD with both severe expressive and receptive language impairments process sounds differently.
Autism Res 2020, 13: 1718–1729. © 2020 International Society for Autism Research and Wiley Periodicals LLC.
Abstract: Individuals with autism spectrum disorder (ASD) often exhibit atypical imitation. However, few studies have identified clear quantitative characteristics of vocal imitation in ASD. This study investigated imitation of speech and song in English-speaking individuals with and without ASD and its modulation by age. Participants consisted of 25 autistic children and 19 autistic adults, who were compared to 25 children and 19 adults with typical development matched on age, gender, musical training, and cognitive abilities. The task required participants to imitate speech and song stimuli with varying pitch and duration patterns. Acoustic analyses of the imitation performance suggested that individuals with ASD were worse than controls on absolute pitch and duration matching for both speech and song imitation, although they performed as well as controls on relative pitch and duration matching. Furthermore, the two groups produced similar numbers of pitch contour, pitch interval, and timing errors. Across both groups, sung pitch was imitated more accurately than spoken pitch, whereas spoken duration was imitated more accurately than sung duration. For speech stimuli, children imitated spoken pitch more accurately than adults, whereas age showed no significant relationship to song imitation. These results reveal a vocal imitation deficit across speech and music domains in ASD that is specific to absolute pitch and duration matching. This finding provides evidence for shared mechanisms between speech and song imitation, with independent implementation of relative versus absolute features.
Lay Summary: Individuals with autism spectrum disorder (ASD) often exhibit atypical imitation of actions and gestures. Characteristics of vocal imitation in ASD remain unclear. By comparing speech and song imitation, this study shows that individuals with ASD have a vocal imitative deficit that is specific to absolute pitch and duration matching, while performing as well as controls on relative pitch and duration matching, across speech and music domains.
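The distinction above between absolute and relative pitch matching can be sketched numerically: absolute error compares produced and target pitches note by note, while relative error compares successive intervals and therefore ignores any constant transposition. This is a hypothetical illustration; the function name and the mean-absolute-error summary are assumptions, not the study's acoustic analysis.

```python
import numpy as np

# Hypothetical absolute vs. relative pitch-matching errors, both in cents.
# A production that is a perfect transposition of the target has a large
# absolute error but zero relative (interval) error.

def pitch_errors(target_hz, produced_hz):
    t = 1200 * np.log2(np.asarray(target_hz, dtype=float))
    p = 1200 * np.log2(np.asarray(produced_hz, dtype=float))
    absolute = np.mean(np.abs(p - t))                    # note-by-note, cents
    relative = np.mean(np.abs(np.diff(p) - np.diff(t)))  # interval error, cents
    return absolute, relative
```

Under this scheme, the reported group difference (worse absolute but intact relative matching in ASD) would appear as elevated `absolute` with comparable `relative` values.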
Purpose: To develop and evaluate a technique for 3D dynamic MRI of the full vocal tract at high temporal resolution during natural speech.
Methods: We demonstrate 2.4 × 2.4 × 5.8 mm³ spatial resolution, 61-ms temporal resolution, and a 200 × 200 × 70 mm³ FOV. The proposed method uses 3D gradient-echo imaging with a custom upper-airway coil, a minimum-phase slab excitation, a stack-of-spirals readout, a pseudo golden-angle view order in kx–ky, a linear Cartesian order along kz, and a spatiotemporal finite-difference constrained reconstruction, with 13-fold acceleration. This technique is evaluated using in vivo vocal tract airway data from 2 healthy subjects acquired on a 1.5T scanner, 1 with synchronized audio, with 2 tasks during production of natural speech, and via comparison with interleaved multislice 2D dynamic MRI.
Results: This technique captured known dynamics of vocal tract articulators during natural speech tasks, including tongue gestures during the production of the consonants "s" and "l" and of consonant–vowel syllables, and was additionally consistent with 2D dynamic MRI. Coordination of lingual (tongue) movements for consonants is demonstrated via volume-of-interest analysis. Vocal tract area function dynamics revealed critical lingual constriction events along the length of the vocal tract for consonants and vowels.
Conclusion: We demonstrate the feasibility of 3D dynamic MRI of the full vocal tract, with spatiotemporal resolution adequate to visualize lingual movements for consonants and vocal tract shaping during natural productions of consonant–vowel syllables, without requiring multiple repetitions.
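The idea behind a spatiotemporal finite-difference constrained reconstruction can be shown in miniature. The toy below is a deliberately simplified stand-in for the paper's method: it uses an l2 temporal-smoothness penalty rather than a constrained formulation, a fixed (hypothetical) per-frame sampling matrix A instead of frame-varying spiral k-space sampling, and plain gradient descent. It illustrates only the core trade-off: data consistency versus small temporal finite differences between frames.

```python
import numpy as np

# Toy finite-difference regularized dynamic reconstruction:
#   min_x  sum_t ||A x[t] - y[t]||^2 / 2 + (lam / 2) * sum_t ||x[t+1] - x[t]||^2
# x has shape (n_frames, n_voxels); y has shape (n_frames, n_measurements).

def reconstruct(A, y, lam=1.0, step=1e-2, iters=2000):
    x = np.zeros((y.shape[0], A.shape[1]))
    for _ in range(iters):
        grad = (x @ A.T - y) @ A        # data-consistency gradient, A^T(Ax - y)
        dt = np.diff(x, axis=0)         # temporal finite differences x[t+1]-x[t]
        grad[:-1] -= lam * dt           # gradient of the smoothness term
        grad[1:] += lam * dt
        x -= step * grad
    return x
```

With a step size below 2 divided by the largest Hessian eigenvalue (here at most 1 + 4·lam for an orthonormal A), the iteration converges to the regularized least-squares solution; real accelerated MRI exploits the fact that undersampling patterns differ across frames so the temporal penalty shares information between them.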