Temporal proximity is an important cue for multisensory integration. Previous evidence indicates that individuals with autism and schizophrenia are more likely to integrate multisensory inputs over a longer temporal binding window (TBW). However, whether such deficits in audiovisual temporal integration extend to subclinical populations with high schizotypal and autistic traits is unclear. Using audiovisual simultaneity judgment (SJ) tasks for non-speech and speech stimuli, we found that the width of the audiovisual TBW was not significantly correlated with self-reported schizotypal and autistic traits in a group of young adults. Resting-state functional magnetic resonance imaging (fMRI) data were also acquired to explore the neural correlates underlying inter-individual variability in TBW width. Across the entire sample, stronger resting-state functional connectivity (rsFC) between the left superior temporal cortex and the left precuneus, and weaker rsFC between the left cerebellum and the right dorsolateral prefrontal cortex, were correlated with a narrower TBW for speech stimuli. Meanwhile, stronger rsFC between the left anterior superior temporal gyrus and the right inferior temporal gyrus was correlated with a wider audiovisual TBW for non-speech stimuli. The TBW-related rsFC was not affected by levels of subclinical traits. In conclusion, this study indicates that audiovisual temporal processing may not be affected by autistic and schizotypal traits, and that rsFC between brain regions that respond to multisensory information and timing may account for inter-individual differences in TBW width.
Lay abstract: Individuals with ASD and schizophrenia are more likely to perceive asynchronous auditory and visual events as occurring simultaneously, even when they are well separated in time. We investigated whether similar difficulties in audiovisual temporal processing are present in subclinical populations with high autistic and schizotypal traits. We found that the ability to detect audiovisual asynchrony was not affected by different levels of autistic and schizotypal traits. We also found that connectivity between brain regions engaged in multisensory and timing tasks might explain an individual's tendency to bind multisensory information within a wide or narrow time window.
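For context, the width of an audiovisual TBW is typically estimated from SJ data by fitting a psychometric function to the proportion of "simultaneous" responses across stimulus onset asynchronies (SOAs). The following is a minimal sketch with invented data and a common Gaussian-fit convention, not the authors' exact fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical SJ data: SOAs in ms (negative = auditory leading) and the
# proportion of "simultaneous" responses at each SOA (invented values).
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_sim = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.20, 0.05])

def gaussian(soa, amp, mu, sigma):
    """Gaussian psychometric function commonly fitted to SJ data."""
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

(amp, mu, sigma), _ = curve_fit(gaussian, soas, p_sim, p0=[1.0, 0.0, 150.0])

# One common convention: TBW width = range of SOAs over which the fitted
# probability of responding "simultaneous" exceeds a fixed criterion.
criterion = 0.5
half_width = sigma * np.sqrt(2 * np.log(amp / criterion))
print(f"TBW ≈ {2 * half_width:.0f} ms (centered at SOA {mu:.0f} ms)")
```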
- PAR ID: 10452354
- Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
- Date Published:
- Journal Name: Autism Research
- Volume: 14
- Issue: 4
- ISSN: 1939-3792
- Page Range / eLocation ID: p. 668-680
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Studying electroencephalography (EEG) responses to transcranial magnetic stimulation (TMS) is gaining popularity for investigating the dynamics of the brain's complex neural architecture. For example, the primary motor cortex (M1) executes voluntary movements through complex connections with other associated subnetworks. To understand these connections better, we analyzed EEG responses to TMS at left M1 in schizophrenia patients and healthy controls, and contrasted them with resting-state EEG recordings. After removing artifacts from the EEG, we performed 2D-to-3D sLORETA conversion, a well-established source localization method, to estimate the signal strength of 68 source dipoles, or cortical regions, inside the brain. Next, we studied dynamic connectivity by computing the time-evolving spatial coherence of the 2278 (= 68 × (68 − 1) / 2) pairs of cortical regions, using a sliding-window technique with a 200 ms window and a 20 ms shift over 1 s of data. Pairs with consistent coherence (coherence > 0.8 during 200+ sliding windows of patients and controls combined) were chosen to identify stable networks. For example, we found that during the resting state, the precuneus was steadily coherent with the middle and superior temporal gyri in the left hemisphere in both patients and controls, and their connectivity pattern over the sliding windows differed significantly between the two groups (p < 0.05). For M1, the same was true for two other coherent pairs: the supramarginal gyrus with the lateral occipital gyrus in the right hemisphere, and the medial orbitofrontal gyrus with the fusiform gyrus in the left hemisphere. These TMS-EEG dynamic connectivity results can help differentiate patients from healthy subjects and contribute to a better understanding of brain architecture and mechanisms.
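A rough sketch of the sliding-window coherence analysis described above, assuming a 1 kHz sampling rate and SciPy's magnitude-squared coherence estimator (the authors' exact estimator and parameters may differ):

```python
import numpy as np
from scipy.signal import coherence

FS = 1000        # sampling rate in Hz (assumed)
N_REGIONS = 68   # source-localized cortical regions

def sliding_window_coherence(src, fs=FS, win_ms=200, step_ms=20):
    """src: (n_regions, n_samples) source time courses.
    Returns an (n_pairs, n_windows) array of frequency-averaged coherence,
    one value per region pair per 200 ms window, shifted by 20 ms."""
    win = fs * win_ms // 1000
    step = fs * step_ms // 1000
    n_regions, n_samples = src.shape
    starts = range(0, n_samples - win + 1, step)
    pairs = [(i, j) for i in range(n_regions) for j in range(i + 1, n_regions)]
    out = np.empty((len(pairs), len(starts)))
    for w, s in enumerate(starts):
        seg = src[:, s:s + win]
        for p, (i, j) in enumerate(pairs):
            _, cxy = coherence(seg[i], seg[j], fs=fs, nperseg=win // 2)
            out[p, w] = cxy.mean()  # average coherence across frequencies
    return out

# Example: random data standing in for 1 s of source-localized EEG.
rng = np.random.default_rng(0)
coh = sliding_window_coherence(rng.standard_normal((N_REGIONS, 1000)))
print(coh.shape)  # (2278 pairs, 41 windows)
```

Stable pairs could then be selected by counting, for each pair, how many windows exceed the 0.8 coherence threshold across subjects.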
-
Linking cognitive and neural models of audiovisual processing to explore speech perception in autism
Autistic and neurotypical children do not handle audiovisual speech in the same manner. Current evidence suggests that this difference occurs at the level of cue combination. Here, we test whether differences in autistic and neurotypical audiovisual speech perception can be explained by a neural theory of sensory perception in autism, which proposes that heightened levels of neural excitation can account for sensory differences in autism. Through a linking hypothesis that integrates a standard probabilistic cognitive model of cue integration with representations of neural activity, we derive a model that can simulate audiovisual speech perception at a neural population level. Simulations of an audiovisual lexical identification task demonstrate that heightened levels of neural excitation at the level of cue combination cannot account for the observed differences between autistic and neurotypical children's audiovisual speech perception.
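The "standard probabilistic cognitive model of cue integration" referenced here is commonly formalized as reliability-weighted (maximum-likelihood) fusion of Gaussian cue estimates. A minimal sketch of that standard model, not the paper's full neural-population linking model:

```python
import numpy as np

def fuse_cues(mu_a, sigma_a, mu_v, sigma_v):
    """Maximum-likelihood combination of two Gaussian cues: each cue is
    weighted by its inverse variance (its reliability), and the fused
    estimate is more precise than either cue alone."""
    w_a = 1.0 / sigma_a**2
    w_v = 1.0 / sigma_v**2
    mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    sigma = np.sqrt(1.0 / (w_a + w_v))
    return mu, sigma

# Example: a noisy auditory estimate (sd 80) and a sharper visual estimate
# (sd 40); the fused percept is pulled toward the more reliable visual cue.
print(fuse_cues(0.0, 80.0, 60.0, 40.0))
```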
-
The brain is organized such that it encodes and maintains category information about thousands of objects. However, how learning shapes these neural representations of object categories is unknown. The present study focuses on faces, examining whether (1) enhanced categorical discrimination or (2) feature analysis enhances face/non-face categorization in the brain. Stimuli ranged from non-faces to faces, with two-toned Mooney images used for testing and gray-scale images used for training. The stimulus set was specifically chosen because it has a true categorical boundary between faces and non-faces, but the stimuli surrounding that boundary have very similar features, making the boundary harder to learn. Brain responses were measured using functional magnetic resonance imaging while participants categorized the stimuli before and after training. Participants were trained either with a categorization task or with a non-categorical feature-analysis task. Interestingly, when participants were categorically trained, the neural activity pattern in the left fusiform gyrus shifted from a graded representation of the stimuli to a categorical representation. This corresponded with categorical face/non-face discrimination, critically including both an increase in selectivity to faces and a decrease in false-alarm responses to non-faces. By contrast, while the activity pattern in the right fusiform cortex correlated with face/non-face categorization prior to training, it was not affected by learning. Our results reveal the key role of the left fusiform cortex in learning face categorization. Given the known right-hemisphere dominance for face-selective responses, our results suggest a rethinking of the relationship between the two hemispheres in face/non-face categorization.
Hum Brain Mapp 38:3648–3658, 2017. © 2017 Wiley Periodicals, Inc.
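One common way to quantify the reported shift from a graded to a categorical representation is to fit both a linear (graded) and a sigmoidal (categorical) model to ROI responses across the stimulus continuum and compare fit quality. A minimal sketch with invented response values, not the study's actual analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical ROI responses across a non-face -> face continuum (invented).
levels = np.linspace(0.0, 1.0, 9)
roi = np.array([0.10, 0.15, 0.20, 0.30, 0.70, 0.85, 0.90, 0.92, 0.95])

def graded(x, a, b):
    """Graded coding: response tracks the stimulus level linearly."""
    return a * x + b

def categorical(x, lo, hi, x0, k):
    """Categorical coding: sigmoidal response with a boundary at x0."""
    return lo + (hi - lo) / (1.0 + np.exp(-k * (x - x0)))

for name, f, p0 in [("graded", graded, [1.0, 0.0]),
                    ("categorical", categorical, [0.0, 1.0, 0.5, 10.0])]:
    p, _ = curve_fit(f, levels, roi, p0=p0, maxfev=10000)
    rss = np.sum((roi - f(levels, *p)) ** 2)
    print(f"{name}: residual sum of squares = {rss:.4f}")
```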
-
Vocal imitation in English-speaking autistic individuals has been shown to be atypical. Speaking a tone language such as Mandarin facilitates vocal imitation among non-autistic individuals, yet no studies have examined whether this effect holds for autistic individuals. To address this question, we compared vocal imitation of speech and song between 33 autistic Mandarin speakers and 30 age-matched non-autistic peers. Participants were recorded while imitating 40 speech and song stimuli with varying pitch and duration patterns. Acoustic analyses showed that autistic participants imitated relative pitch (but not absolute pitch) less accurately than non-autistic participants for speech, whereas for song the two groups performed comparably on both absolute and relative pitch matching. Regarding duration matching, autistic participants imitated relative duration (the inter-onset interval between consecutive notes/syllables) less accurately than non-autistic individuals for both speech and song, while their lower accuracy in absolute duration matching of the notes/syllables was present only in the song condition. These findings indicate that experience with tone languages does not mitigate the challenges autistic individuals face in imitating speech and song, highlighting the importance of considering the domains and features under investigation, as well as individual differences in cognitive abilities and language backgrounds, when examining imitation in autism.
Lay abstract: Atypical vocal imitation has been identified in English-speaking autistic individuals, whereas the characteristics of vocal imitation in tone-language-speaking autistic individuals remain unexplored. By comparing speech and song imitation, the present study reveals a unique pattern of atypical vocal imitation across the speech and music domains among Mandarin-speaking autistic individuals. The findings suggest that tone language experience does not compensate for difficulties in vocal imitation in autistic individuals and extend our understanding of vocal imitation in autism across different languages.
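For illustration, absolute versus relative pitch-matching accuracy of the kind analyzed above might be computed from note-level f0 estimates as follows. This is a hedged sketch with invented values; the study's actual acoustic pipeline may differ:

```python
import numpy as np

def cents(f0, ref=440.0):
    """Convert frequencies in Hz to cents relative to a reference pitch."""
    return 1200.0 * np.log2(np.asarray(f0, dtype=float) / ref)

def pitch_matching_errors(target_f0, imitation_f0):
    """target_f0 / imitation_f0: median f0 (Hz) of successive notes/syllables.
    Absolute error compares pitches directly; relative error compares the
    intervals between consecutive notes, ignoring overall transposition."""
    t, m = cents(target_f0), cents(imitation_f0)
    abs_err = np.mean(np.abs(m - t))
    rel_err = np.mean(np.abs(np.diff(m) - np.diff(t)))
    return abs_err, rel_err

# Invented example: the imitation is transposed about three semitones down
# but preserves the intervals, so the absolute error is large (~300 cents)
# while the relative error stays small.
target = [220.0, 247.0, 262.0, 294.0]
imitation = [185.0, 208.0, 220.0, 247.0]
print(pitch_matching_errors(target, imitation))
```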
-
Neurobiological models of speech perception posit that both left and right posterior temporal brain regions are involved in the early auditory analysis of speech sounds. However, frank deficits in speech perception are not readily observed in individuals with right-hemisphere damage. Instead, damage to the right hemisphere is often associated with impairments in vocal identity processing. Herein lies an apparent paradox: the mapping between acoustics and speech sound categories can vary substantially across talkers, so why might right-hemisphere damage selectively impair vocal identity processing without obvious effects on speech perception? In this review, I attempt to clarify the role of the right hemisphere in speech perception through a careful consideration of its role in processing vocal identity. I review evidence showing that right posterior superior temporal, right anterior superior temporal, and right inferior/middle frontal regions all play distinct roles in vocal identity processing. In considering the implications of these findings for neurobiological accounts of speech perception, I argue that the recruitment of right posterior superior temporal cortex during speech perception may specifically reflect the process of conditioning phonetic identity on talker information. I suggest that the relative lack of involvement of other right-hemisphere regions in speech perception may be because speech perception does not necessarily place a high burden on talker processing systems, and I argue that the extant literature hints at potential subclinical impairments in the speech perception abilities of individuals with right-hemisphere damage.