

Title: Modeling Rhythm in Speech as in Music: Towards a Unified Cognitive Representation
Rhythm plays an important role in language perception and learning, with infants perceiving rhythmic differences across languages at birth. While the mechanisms underlying rhythm perception in speech remain unclear, one interesting possibility is that they are similar to those involved in the perception of musical rhythm. In this work, we adopt a model originally designed for musical rhythm to simulate speech rhythm perception. We show that this model replicates the behavioral results of language discrimination in newborns and outperforms an existing model of infant language discrimination. We also find that percussives, the fast-changing components of the acoustic signal, are necessary for distinguishing languages with different rhythms, which suggests that percussives are essential for rhythm perception. Our music-inspired model of speech rhythm may be seen as a first step towards a unified theory of how rhythm is represented in speech and music.
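The abstract does not spell out how percussives are extracted from the speech signal. As a rough, hedged sketch of one plausible front end, the snippet below isolates the fast-changing component of a recording with harmonic-percussive separation and summarizes its event timing; the file name, sampling rate, and all parameter choices are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch (not the paper's actual pipeline): isolate the fast-changing
# "percussive" component of a speech recording and summarize its event pattern,
# the kind of input a musical-rhythm model could be given.
import librosa
import numpy as np

def percussive_onset_pattern(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    # Harmonic-percussive separation: the percussive residue keeps the
    # fast-changing, broadband parts of the signal (e.g., plosive bursts).
    _, y_perc = librosa.effects.hpss(y)
    # Onset strength of the percussive component as a coarse rhythm trace.
    env = librosa.onset.onset_strength(y=y_perc, sr=sr)
    onsets = librosa.onset.onset_detect(onset_envelope=env, sr=sr, units="time")
    # Inter-onset intervals are one simple descriptor of the rhythm pattern.
    return np.diff(onsets)

# Hypothetical usage: intervals = percussive_onset_pattern("utterance.wav")
```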
Award ID(s): 1734245, 2120834
NSF-PAR ID: 10333232
Journal Name: Proceedings of the Conference on Cognitive Computational Neuroscience
Page Range / eLocation ID: 324-327
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Uncovering the genetic underpinnings of musical ability and engagement is a foundational step for exploring their wide‐ranging associations with cognition, health, and neurodevelopment. Prior studies have relied on twin and family designs, demonstrating moderate heritability of musical phenotypes. The current study used genome‐wide complex trait analysis and polygenic score (PGS) approaches with genotype data to examine genetic influences on two musicality traits (rhythmic perception and music engagement) in N = 1792 unrelated adults in the Vanderbilt Online Musicality Study. Meta‐analyzed heritability estimates (including a replication sample of Swedish individuals) were 31% for rhythmic perception and 12% for self‐reported music engagement. A PGS derived from a recent study on beat synchronization ability predicted both rhythmic perception (β = 0.11) and music engagement (β = 0.19) in our sample, suggesting that genetic influences underlying self‐reported beat synchronization ability also influence individuals’ rhythmic discrimination aptitude and the degree to which they engage in music. Cross‐trait analyses revealed a modest contribution of PGSs from several nonmusical traits (from the cognitive, personality, and circadian chronotype domains) to individual differences in musicality (β = −0.06 to 0.07). This work sheds light on the complex relationship between the genetic architecture of musical rhythm processing, beat synchronization, music engagement, and other nonmusical traits.
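As a hedged illustration of the kind of polygenic score (PGS) analysis summarized above (not the study's actual code or data), the sketch below regresses a simulated rhythmic-perception score on a simulated beat-synchronization PGS plus covariates and reports a standardized beta; every variable and effect size here is a placeholder.

```python
# Toy PGS regression on simulated data; covariates and effect sizes are arbitrary.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1792                                   # sample size reported in the abstract
pgs = rng.standard_normal(n)               # beat-synchronization PGS (z-scored)
age = rng.uniform(18, 80, n)
sex = rng.integers(0, 2, n).astype(float)
rhythm = 0.11 * pgs + 0.01 * (age - age.mean()) + rng.standard_normal(n)

def standardized_beta(y, x, covars):
    """OLS of z-scored y on z-scored x plus covariates; returns the beta for x."""
    z = lambda v: (v - v.mean()) / v.std()
    X = sm.add_constant(np.column_stack([z(x)] + [z(c) for c in covars]))
    return sm.OLS(z(y), X).fit().params[1]

print(standardized_beta(rhythm, pgs, [age, sex]))
```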

     
  2. Prosody perception is fundamental to spoken language communication as it supports comprehension, pragmatics, morphosyntactic parsing of speech streams, and phonological awareness. One particular aspect of prosody, perceptual sensitivity to speech rhythm patterns in words (i.e., lexical stress sensitivity), is also a robust predictor of reading skills, though it has received much less attention than phonological awareness in the literature. Given the importance of prosody and reading in educational outcomes, reliable and valid tools are needed to conduct large-scale health and genetic investigations of individual differences in prosody, as groundwork for investigating the biological underpinnings of the relationship between prosody and reading. Motivated by this need, we present the Test of Prosody via Syllable Emphasis (“TOPsy”) and highlight its merits as a phenotyping tool to measure lexical stress sensitivity in as little as 10 min, in scalable internet-based cohorts. In this 28-item speech rhythm perception test [modeled after the stress identification test from Wade-Woolley (2016)], participants listen to multi-syllabic spoken words and are asked to identify lexical stress patterns. Psychometric analyses in a large internet-based sample show excellent reliability and predictive validity for self-reported difficulties with speech-language, reading, and musical beat synchronization. Further, items loaded onto two distinct factors corresponding to initially stressed vs. non-initially stressed words. These results are consistent with previous reports that speech rhythm perception abilities correlate with musical rhythm sensitivity and speech-language/reading skills, and are implicated in reading disorders (e.g., dyslexia). We conclude that TOPsy can serve as a useful tool for studying prosodic perception at large scales in a variety of different settings, and importantly can act as a validated brief phenotype for future investigations of the genetic architecture of prosodic perception and its relationship to educational outcomes.
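As a hedged illustration of the item-level reliability analysis mentioned above (not the TOPsy analysis itself), the sketch below computes Cronbach's alpha for a simulated, binary-scored response matrix; only the 28-item length is taken from the abstract, and everything else is arbitrary.

```python
# Cronbach's alpha on simulated correct/incorrect responses to a 28-item test.
import numpy as np

def cronbach_alpha(responses):
    """responses: (n_participants, n_items) array of item scores."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    item_var = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var / total_var)

rng = np.random.default_rng(1)
ability = rng.standard_normal((500, 1))       # latent stress sensitivity (simulated)
difficulty = rng.uniform(-1, 1, (1, 28))      # 28 items, as in the test above
p_correct = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
scores = (rng.random((500, 28)) < p_correct).astype(int)
print(cronbach_alpha(scores))
```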
  3. McPherson, Laura; Winter, Yoad (Eds.)
    Text-setting patterns in music have served as a key data source in the development of theories of prosody and rhythm in stress-based languages, but have been explored less from a rhythmic perspective in the realm of tone languages. African tone languages have been especially under-studied in terms of rhythmic patterns in text-setting, likely in large part due to the ill-understood status of metrical structure and prosodic prominence asymmetries in many of these languages. Here, we explore how language is mapped to rhythmic structure in traditional folksongs sung in Medʉmba, a Grassfields Bantu language spoken in Cameroon. We show that, despite complex and varying rhythmic structures within and across songs, correspondences emerge between musical rhythm and linguistic structure at the level of stem position, tone, and prosodic structure. Our results reinforce the notion that metrical prominence asymmetries are present in African tone languages, and that they play an important coordinative role in music and movement. 
  4. Neural entrainment to musical rhythm is thought to underlie the perception and production of music. In aging populations, the strength of neural entrainment to rhythm has been found to be attenuated, particularly during attentive listening to auditory streams. However, previous studies on neural entrainment to rhythm and aging have often employed artificial auditory rhythms or limited pieces of recorded, naturalistic music, failing to account for the diversity of rhythmic structures found in natural music. As part of a larger project assessing a novel music-based intervention for healthy aging, we investigated neural entrainment to musical rhythms in the electroencephalogram (EEG) while participants listened to self-selected musical recordings across a sample of younger and older adults. We specifically measured neural entrainment at the level of the musical pulse, quantified here as the phase-locking value (PLV), after normalizing the PLVs to each musical recording’s detected pulse frequency. As predicted, we observed strong neural phase-locking to the musical pulse, and to the sub-harmonic and harmonic levels of musical meter. Overall, PLVs were not significantly different between older and younger adults. This preserved neural entrainment to musical pulse and rhythm could support the design of music-based interventions that aim to modulate endogenous brain activity via self-selected music for healthy cognitive aging.
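The abstract does not give the exact analysis pipeline, so the following is only a sketch of one standard way to compute a phase-locking value at a detected pulse frequency: band-pass both signals around the pulse rate, extract instantaneous phase with the Hilbert transform, and take the mean resultant length of the phase difference. The filter settings and the synthetic 2 Hz example are illustrative assumptions.

```python
# Hedged PLV sketch on synthetic signals sharing a 2 Hz "pulse".
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv_at_frequency(eeg, stim, fs, f_pulse, half_bw=0.5):
    """PLV between an EEG channel and a stimulus envelope near f_pulse (Hz)."""
    b, a = butter(4, [f_pulse - half_bw, f_pulse + half_bw], btype="band", fs=fs)
    phase_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))
    phase_stim = np.angle(hilbert(filtfilt(b, a, stim)))
    return np.abs(np.mean(np.exp(1j * (phase_eeg - phase_stim))))

fs = 250
t = np.arange(0, 60, 1 / fs)
stim = np.cos(2 * np.pi * 2.0 * t)                               # stimulus pulse
eeg = np.cos(2 * np.pi * 2.0 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(plv_at_frequency(eeg, stim, fs, f_pulse=2.0))              # high, near 1.0
```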
  5. Abstract

    Previous work suggests that auditory–vestibular interactions, which emerge during bodily movement to music, can influence the perception of musical rhythm. In a seminal study on the ontogeny of musical rhythm, Phillips‐Silver and Trainor (2005) found that bouncing infants to an unaccented rhythm influenced infants’ perceptual preferences for accented rhythms that matched the rate of bouncing. In the current study, we ask whether nascent, diffuse coupling between auditory and motor systems is sufficient to bootstrap short‐term Hebbian plasticity in the auditory system and explain infants’ preferences for accented rhythms thought to arise from auditory–vestibular interactions. First, we specify a nonlinear, dynamical system in which two oscillatory neural networks, representing developmentally nascent auditory and motor systems, interact through weak, non‐specific coupling. The auditory network was equipped with short‐term Hebbian plasticity, allowing the auditory network to tune its intrinsic resonant properties. Next, we simulate the effect of vestibular input (e.g., infant bouncing) on infants’ perceptual preferences for accented rhythms. We found that simultaneous auditory–vestibular training shaped the model's response to musical rhythm, enhancing vestibular‐related frequencies in auditory‐network activity. Moreover, simultaneous auditory–vestibular training, relative to auditory‐ or vestibular‐only training, facilitated short‐term auditory plasticity in the model, producing stronger oscillator connections in the auditory network. Finally, when tested on a musical rhythm, models which received simultaneous auditory–vestibular training, but not models that received auditory‐ or vestibular‐only training, resonated strongly at frequencies related to their “bouncing,” a finding qualitatively similar to infants’ preferences for accented rhythms that matched the rate of infant bouncing.
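The model just described is a specific dynamical system; the snippet below is only a minimal caricature of the general mechanism, using a bank of "auditory" phase oscillators, a weakly and non-specifically coupled "motor/vestibular" oscillator, and a Hebbian rule with decay that strengthens connections between in-phase auditory units. All frequencies, gains, and learning rates are arbitrary choices, not the authors' parameters.

```python
# Toy coupled-oscillator simulation with Hebbian plasticity (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
freqs = np.linspace(0.5, 4.0, 16)        # natural frequencies of auditory units (Hz)
theta = rng.uniform(0, 2 * np.pi, 16)    # auditory oscillator phases
W = np.zeros((16, 16))                   # plastic intra-auditory connections
phi = 0.0                                # motor/vestibular oscillator phase
f_bounce = 2.0                           # "bouncing" rate (Hz)
dt, k_motor, eta, decay = 0.005, 1.5, 0.02, 0.01

for _ in range(int(60 / dt)):            # 60 s of simulated joint training
    drive = k_motor * np.sin(phi - theta)                              # motor -> auditory
    recur = (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)  # learned recurrence
    theta = theta + dt * (2 * np.pi * freqs + drive + recur)
    phi = phi + dt * 2 * np.pi * f_bounce
    # Hebbian rule: in-phase pairs strengthen, with slow decay (short-term plasticity).
    W = W + dt * (eta * np.cos(theta[:, None] - theta[None, :]) - decay * W)
    np.fill_diagonal(W, 0.0)

# In this toy setting, the most strongly connected units sit near the bouncing rate.
print(freqs[np.argmax(W.sum(axis=1))])
```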

     