

Title: Mechanisms of voice control related to prosody in autism spectrum disorder and first‐degree relatives
Lay Summary

Previous research has identified atypicalities in prosody (e.g., intonation) in individuals with ASD and in a subset of their first-degree relatives. To better understand the mechanisms underlying prosodic differences in ASD, this study examined how individuals with ASD and their parents adjusted control of their voice in response to unexpected changes in what they heard themselves say (i.e., audio-vocal integration). Results suggest that disruptions to audio-vocal integration in individuals with ASD contribute to ASD-related prosodic atypicalities, and that the more subtle differences observed in parents could reflect underlying genetic liability to ASD.

 
NSF-PAR ID:
10460203
Author(s) / Creator(s):
Publisher / Repository:
Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name:
Autism Research
Volume:
12
Issue:
8
ISSN:
1939-3792
Page Range / eLocation ID:
p. 1192-1210
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Entrainment, the unconscious process leading to coordination between communication partners, is an important dynamic human behavior that helps us connect with one another. Difficulty developing and sustaining social connections is a hallmark of autism spectrum disorder (ASD). Subtle differences in social behaviors have also been noted in first-degree relatives of autistic individuals and may express underlying genetic liability to ASD. An in-depth examination of verbal entrainment was conducted to assess disruptions to entrainment as a contributing factor to the language phenotype in ASD. Results revealed distinct patterns of prosodic and lexical entrainment in individuals with ASD. Notably, subtler differences in prosodic and syntactic entrainment were identified in parents of autistic individuals. Findings point toward entrainment, particularly prosodic entrainment, as a key process linked to social communication difficulties in ASD and reflective of genetic liability to ASD.

     
  2. Abstract Lay summary

    Individuals with ASD and schizophrenia are more likely to perceive asynchronous auditory and visual events as occurring simultaneously, even when they are well separated in time. We investigated whether similar difficulties in audiovisual temporal processing were present in subclinical populations with high autistic and schizotypal traits. We found that the ability to detect audiovisual asynchrony was not affected by different levels of autistic and schizotypal traits. We also found that connectivity of some brain regions engaged in multisensory and timing tasks might explain an individual's tendency to bind multisensory information within a wide or narrow time window. Autism Res 2021, 14: 668–680. © 2020 International Society for Autism Research and Wiley Periodicals LLC

     
  3. Lay Summary

    Minimally verbal and low-verbal children and adolescents with autism (ASD-MLV) displayed more atypical auditory behaviors (e.g., ear covering and humming) than verbally fluent participants with ASD. In ASD-MLV participants, time spent exhibiting such behaviors was associated with receptive vocabulary deficits and weaker neural responses to changes in sound loudness. Findings suggest that individuals with ASD who have both severe expressive and receptive language impairments process sounds differently. Autism Res 2020, 13: 1718–1729. © 2020 International Society for Autism Research and Wiley Periodicals LLC

     
  4. Abstract Lay Summary

    Individuals with autism spectrum disorder (ASD) often exhibit atypical imitation of actions and gestures. Characteristics of vocal imitation in ASD remain unclear. By comparing speech and song imitation, this study shows that individuals with ASD have a vocal imitation deficit specific to absolute pitch and duration matching, while performing as well as controls on relative pitch and duration matching, across speech and music domains.

     
  5. Purpose

    To develop and evaluate a technique for 3D dynamic MRI of the full vocal tract at high temporal resolution during natural speech.

    Methods

    We demonstrate 2.4 × 2.4 × 5.8 mm³ spatial resolution, 61-ms temporal resolution, and a 200 × 200 × 70 mm³ FOV. The proposed method uses 3D gradient-echo imaging with a custom upper-airway coil, a minimum-phase slab excitation, a stack-of-spirals readout, pseudo golden-angle view order in kx–ky, linear Cartesian order along kz, and spatiotemporal finite-difference constrained reconstruction, with 13-fold acceleration. This technique is evaluated using in vivo vocal tract airway data from 2 healthy subjects acquired on a 1.5T scanner (1 with synchronized audio), with 2 tasks during natural speech production, and via comparison with interleaved multislice 2D dynamic MRI.
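
    The abstract names a spatiotemporal finite-difference constrained reconstruction but does not spell out its form here; as a sketch of this general class of reconstruction (the specific norms, operators, and weights below are assumptions rather than details reported in the paper), the dynamic image series x is typically recovered as

    \hat{x} = \arg\min_{x} \; \lVert A x - d \rVert_2^2 + \lambda_t \lVert D_t x \rVert_1 + \lambda_s \lVert D_s x \rVert_1

    where A is the encoding operator combining coil sensitivities with the stack-of-spirals sampling, d is the acquired k-space data, and D_t, D_s are temporal and spatial finite-difference operators whose weights \lambda_t, \lambda_s trade data fidelity against spatiotemporal regularity.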

    Results

    This technique captured known dynamics of vocal tract articulators during natural speech tasks, including tongue gestures during the production of the consonants "s" and "l" and of consonant–vowel syllables, and was additionally consistent with 2D dynamic MRI. Coordination of lingual (tongue) movements for consonants is demonstrated via volume-of-interest analysis. Vocal tract area function dynamics revealed critical lingual constriction events along the length of the vocal tract for consonants and vowels.

    Conclusion

    We demonstrate the feasibility of 3D dynamic MRI of the full vocal tract, with spatiotemporal resolution adequate to visualize lingual movements for consonants and vocal tract shaping during natural productions of consonant–vowel syllables, without requiring multiple repetitions.

     