Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has often been overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
- NSF-PAR ID: 10366082
- Journal Name: Frontiers in Neuroscience
- Volume: 16
- ISSN: 1662-453X
- Sponsoring Org: National Science Foundation
More Like this
-
While a range of measures based on speech production, language, and perception are possible (Manun et al., 2020) for the prediction and estimation of speech intelligibility, what constitutes second language (L2) intelligibility remains under-defined. Prosodic and temporal features (i.e., stress, speech rate, rhythm, and pause placement) have been shown to impact listener perception (Kang et al., 2020), yet their relationship with highly intelligible speech remains unclear. This study aimed to characterize L2 speech intelligibility. Acoustic analyses, using PRAAT and Python scripts, were conducted on 405 speech samples (30 s) from 102 L2 English speakers with a wide variety of backgrounds, proficiency levels, and intelligibility levels. The results indicate that highly intelligible speakers of English produce between 2 and 4 syllables per second, and that faster or slower rates are associated with lower intelligibility. Silent pauses between 0.3 and 0.8 s were associated with the highest levels of intelligibility. Rhythm, measured by Δ syllable length of all content syllables, was marginally associated with intelligibility. Finally, lexical stress accuracy did not interfere substantially with intelligibility until fewer than 70% of the polysyllabic words carried correct stress. These findings inform the fields of first and second language research as well as language education and pathology.
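As a rough illustration of how such temporal measures can be derived (a minimal sketch, not the PRAAT/Python scripts used in the study; the function name rate_and_pauses, the silence threshold, frame sizes, and the energy-peak syllable heuristic are all assumptions), syllables per second and silent-pause durations might be estimated from a recording as follows:

```python
# Illustrative sketch: estimate syllables/second and silent-pause durations
# from a mono WAV file, using only numpy and scipy. Thresholds are assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

def rate_and_pauses(path, frame_ms=25, hop_ms=10, silence_db=-25.0):
    sr, x = wavfile.read(path)
    x = x.astype(np.float64)
    if x.ndim > 1:                      # down-mix multi-channel audio to mono
        x = x.mean(axis=1)
    x /= (np.abs(x).max() + 1e-12)      # normalize amplitude

    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n = 1 + max(0, (len(x) - frame) // hop)
    rms = np.array([np.sqrt(np.mean(x[i * hop:i * hop + frame] ** 2)) for i in range(n)])
    db = 20 * np.log10(rms + 1e-12)
    db -= db.max()                      # dB relative to the loudest frame

    voiced = db > silence_db            # crude per-frame speech/silence decision

    # Silent pauses: durations (s) of consecutive non-speech frames
    pauses, run = [], 0
    for v in np.append(voiced, True):   # sentinel flushes the final run
        if not v:
            run += 1
        elif run:
            pauses.append(run * hop_ms / 1000)
            run = 0
    mid_pauses = [p for p in pauses if 0.3 <= p <= 0.8]

    # Crude syllable-nucleus count: energy peaks in speech regions, >= 120 ms apart
    peaks, _ = find_peaks(np.where(voiced, db, -80.0),
                          distance=max(1, int(120 / hop_ms)))
    phonated = voiced.sum() * hop_ms / 1000
    syll_per_sec = len(peaks) / phonated if phonated else 0.0
    return syll_per_sec, mid_pauses

# Example: rate, pauses = rate_and_pauses("sample.wav")
```

In practice, syllable nuclei are usually detected with more robust intensity- and voicing-based routines (e.g., Praat scripts) and pause boundaries are checked manually; the 0.3–0.8 s band above simply mirrors the pause range the abstract links to high intelligibility.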
-
McPherson, Laura; Winter, Yoad (Eds.)
Text-setting patterns in music have served as a key data source in the development of theories of prosody and rhythm in stress-based languages, but have been explored less from a rhythmic perspective in the realm of tone languages. African tone languages have been especially under-studied in terms of rhythmic patterns in text-setting, likely in large part due to the ill-understood status of metrical structure and prosodic prominence asymmetries in many of these languages. Here, we explore how language is mapped to rhythmic structure in traditional folksongs sung in Medʉmba, a Grassfields Bantu language spoken in Cameroon. We show that, despite complex and varying rhythmic structures within and across songs, correspondences emerge between musical rhythm and linguistic structure at the level of stem position, tone, and prosodic structure. Our results reinforce the notion that metrical prominence asymmetries are present in African tone languages, and that they play an important coordinative role in music and movement.
-
The Mayan language Mam uses complex predicates to express events. Complex predicates map multiple semantic elements onto a single word, and consequently have a blend of lexical and phrasal features. The chameleon-like nature of complex predicates provides a window on children’s ability to express phrasal combinations at the one-word stage of language development. The ubiquity of complex predicates in the adult language ensures that children will produce complex predicates as some of their first words. The verb complex in Mam has obligatory inflections for aspect, person, and, to a degree, direction. The inflections vary in degree of attachment between syllable segments, affixes, and clitics. Inflections with vowels are phonologically free, while inflections without vowels attach as either syllable segments or affixes. The Mam verb complex requires the addition of a phrasal layer to prosodic models of lexical acquisition. The paper uses this extended version of prosodic theory to make five predictions for the acquisition of the verb complex. The paper analyzes production data for three children between 2;0 and 2;8 acquiring the northern variety of Mam spoken in San Ildefonso Ixtahuacán, Guatemala. The children’s production data for both the intransitive and transitive verb complexes support all five predictions to some degree. The children produced prefixes more frequently on vowel-initial stems than on consonant-initial stems, and they produced imperative suffixes more frequently than prefixes on consonant-initial stems. The children exhibited developmental differences and produced phrasal contractions that the prosodic theory did not predict. The results underline the need to integrate prosody into models of morphosyntactic development, and highlight the significance of complex predicates for theories of language acquisition.
-
Differences in prosody (e.g., intonation, rhythm) are among the most obvious language‐related impairments in autism spectrum disorder (ASD), and significantly impact communication. Subtle prosodic differences have also been identified in a subset of clinically unaffected first‐degree relatives of individuals with ASD, and may reflect genetic liability to ASD. This study investigated the neural basis of prosodic differences in ASD and first‐degree relatives through analysis of feedforward and feedback control involved in the planning, production, self‐monitoring, and self‐correction of speech by using a pitch‐perturbed auditory feedback paradigm during sustained vowel and speech production. Results revealed larger vocal response magnitudes to pitch‐perturbed auditory feedback across tasks in ASD and ASD parent groups, with differences in sustained vowel production driven by parents who displayed subclinical personality and language features associated with ASD (i.e., broad autism phenotype). Both ASD and ASD parent groups exhibited increased response onset latencies during sustained vowel production, while the ASD parent group exhibited decreased response onset latencies during speech production. Vocal response magnitudes across tasks were associated with prosodic atypicalities in both individuals with ASD and their parents. Exploratory event‐related potential (ERP) analyses in a subgroup of participants during the sustained vowel task revealed reduced P1 ERP amplitudes in the ASD group, with similar trends observed in parents. Overall, results suggest underdeveloped feedforward systems and neural attenuation in detecting audio‐vocal feedback may contribute to ASD‐related prosodic atypicalities. Importantly, results implicate atypical audio‐vocal integration as a marker of genetic risk to ASD, evident in ASD and among clinically unaffected relatives.
© 2019 The Authors. Autism Res 2019, 12: 1192–1210. Autism Research published by International Society for Autism Research and Wiley Periodicals, Inc.
Lay Summary: Previous research has identified atypicalities in prosody (e.g., intonation) in individuals with ASD and a subset of their first‐degree relatives. In order to better understand the mechanisms underlying prosodic differences in ASD, this study examined how individuals with ASD and their parents responded to unexpected differences in what they heard themselves say to modify control of their voice (i.e., audio‐vocal integration). Results suggest that disruptions to audio‐vocal integration in individuals with ASD contribute to ASD‐related prosodic atypicalities, and the more subtle differences observed in parents could reflect underlying genetic liability to ASD.
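To make the two dependent measures of the perturbation paradigm concrete (a minimal single-trial sketch under assumed conventions, not the study's analysis pipeline; the function name vocal_response, the baseline window, the response window, and the 2-SD onset criterion are all assumptions), the vocal response magnitude and onset latency could be computed from an extracted F0 trace like this:

```python
# Illustrative sketch: quantify the vocal response to one pitch-perturbation
# trial from a produced-F0 trace. Windows and thresholds are assumptions.
import numpy as np

def vocal_response(t, f0_hz, pert_onset, base_win=0.2, resp_win=(0.06, 0.6)):
    """t: time (s); f0_hz: produced F0 (Hz, NaN where unvoiced); pert_onset: perturbation time (s)."""
    base = (t >= pert_onset - base_win) & (t < pert_onset)
    baseline_hz = np.nanmean(f0_hz[base])

    # Express produced F0 as deviation from the pre-perturbation baseline, in cents
    cents = 1200 * np.log2(f0_hz / baseline_hz)

    # Response magnitude: largest absolute deviation within the response window
    resp = (t >= pert_onset + resp_win[0]) & (t <= pert_onset + resp_win[1])
    magnitude = np.nanmax(np.abs(cents[resp])) if resp.any() else np.nan

    # Onset latency: first post-perturbation sample exceeding 2 SD of baseline noise
    noise_sd = np.nanstd(cents[base])
    idx = np.flatnonzero((t >= pert_onset) & (np.abs(cents) > 2 * noise_sd))
    latency = (t[idx[0]] - pert_onset) if idx.size else np.nan
    return magnitude, latency
```

Published perturbation studies typically average cents-normalized traces across many trials before measuring magnitude and latency, and score the response relative to the direction of the pitch shift; the single-trial version above is only meant to illustrate how the two measures are defined.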