
Title: The Music of Silence: Part I: Responses to Musical Imagery Encode Melodic Expectations and Acoustics
Musical imagery is the voluntary internal hearing of music in the mind without the need for physical action or external stimulation. Numerous studies have already revealed brain areas activated during imagery. However, it remains unclear to what extent imagined music responses preserve the detailed temporal dynamics of the acoustic stimulus envelope and, crucially, whether melodic expectations play any role in modulating responses to imagined music, as they prominently do during listening. These modulations are important as they reflect aspects of the human musical experience, such as its acquisition, engagement, and enjoyment. This study explored the nature of these modulations in imagined music based on EEG recordings from 21 professional musicians (6 female, 15 male). Regression analyses demonstrated that neural signals during imagery can be predicted accurately, as in the listening task, and were sufficiently robust to allow for accurate identification of the imagined musical piece from the EEG. Our results indicate that the imagery and listening tasks elicited an overlapping but distinct topography of neural responses to sound acoustics, in line with previous fMRI literature. Melodic expectation, however, evoked very similar frontal spatial activation in both conditions, suggesting that both are supported by the same underlying mechanisms. Finally, neural responses induced by imagery exhibited a specific transformation relative to the listening condition, consisting primarily of a relative delay and a polarity inversion of the response. This transformation demonstrates the top-down predictive nature of the expectation mechanisms at work during both listening and imagery.
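As a rough illustration of the regression approach described above, the sketch below fits a lagged ridge regression (a temporal response function, or TRF) that predicts one EEG channel from a single stimulus feature such as the acoustic envelope. The lag count, regularization strength, synthetic data, and all names are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def lagged_design(stimulus, n_lags):
    """Build a [time x lags] design matrix from a 1-D stimulus feature
    (e.g., the acoustic envelope or a melodic-surprisal time series)."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    return X

def fit_trf(stimulus, eeg, n_lags=64, alpha=1.0):
    """Ridge regression from lagged stimulus samples to one EEG channel;
    returns one TRF weight per lag. Hyperparameters are placeholders."""
    X = lagged_design(stimulus, n_lags)
    # Closed-form ridge solution: w = (X'X + alpha*I)^(-1) X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)

# Synthetic stand-ins for a real envelope and recording
rng = np.random.default_rng(0)
envelope = rng.standard_normal(5000)
eeg = np.convolve(envelope, rng.standard_normal(64), mode="same")

trf = fit_trf(envelope, eeg)
prediction = lagged_design(envelope, 64) @ trf
print(f"prediction accuracy r = {np.corrcoef(prediction, eeg)[0, 1]:.2f}")
```

In the same spirit, per-piece prediction accuracies could be compared to decide which candidate piece best explains a held-out EEG segment, which is the intuition behind the identification result reported above.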
Authors:
Award ID(s):
1824198
Publication Date:
NSF-PAR ID:
10309542
Journal Name:
The Journal of Neuroscience
Volume:
41
Issue:
35
ISSN:
0270-6474
Sponsoring Org:
National Science Foundation
More Like this
  1. During music listening, humans routinely acquire the regularities of the acoustic sequences and use them to anticipate and interpret the ongoing melody. Specifically, in line with this predictive framework, brain responses during such listening are thought to reflect a comparison between bottom-up sensory responses and top-down prediction signals generated by an internal model that embodies the listener's music exposure and expectations. To attain a clear view of these predictive responses, previous work has eliminated the sensory inputs by inserting artificial silences (or sound omissions) that leave behind only the corresponding predictions of the thwarted expectations. Here, we demonstrate an alternative approach in which we decode the predictive electroencephalography (EEG) responses to the silent intervals that are naturally interspersed within the music. We did this as participants (experiment 1, 20 participants, 10 female; experiment 2, 21 participants, 6 female) listened to or imagined Bach piano melodies. Prediction signals were quantified and assessed via a computational model of the melodic structure of the music and were shown to exhibit the same response characteristics when measured during listening or imagining. These include an inverted polarity for both silence and imagined responses relative to listening, as well as response magnitude modulations that precisely reflect the expectations of notes and silences in both listening and imagery conditions. These findings therefore provide a unifying view that links results from many previous paradigms, including omission reactions and the expectation modulation of sensory responses, all in the context of naturalistic music listening. (A minimal sketch of this silence-locked analysis appears after this list.)
  2. Human engagement in music rests on underlying elements such as the listener's cultural background and interest in music. These factors modulate how listeners anticipate musical events, a process that induces instantaneous neural responses as the music confronts these expectations. Measuring such neural correlates would represent a direct window into high-level brain processing. Here we recorded cortical signals as participants listened to Bach melodies. We assessed the relative contributions of acoustic versus melodic components of the music to the neural signal. Melodic features included information on pitch progressions and their tempo, which were extracted from a predictive model of musical structure based on Markov chains. We related the music to brain activity with temporal response functions, demonstrating, for the first time, distinct cortical encoding of pitch and note-onset expectations during naturalistic music listening. This encoding was most pronounced at response latencies up to 350 ms and in both planum temporale and Heschl's gyrus. (A toy Markov-chain expectation model is sketched after this list.)
  3. Many people listen to music for hours every day, often near bedtime. We investigated whether music listening affects sleep, focusing on a rarely explored mechanism: involuntary musical imagery (earworms). In Study 1 (N = 199, mean age = 35.9 years), individuals who frequently listen to music reported persistent nighttime earworms, which were associated with worse sleep quality. In Study 2 (N = 50, mean age = 21.2 years), we randomly assigned each participant to listen to lyrical or instrumental-only versions of popular songs before bed in a laboratory, discovering that instrumental music increased the incidence of nighttime earworms and worsened polysomnography-measured sleep quality. In both studies, earworms were experienced during awakenings, suggesting that the sleeping brain continues to process musical melodies. Study 3 substantiated this possibility by showing a significant increase in frontal slow oscillation activity, a marker of sleep-dependent memory consolidation. Thus, some types of music can disrupt nighttime sleep by inducing long-lasting earworms that are perpetuated by spontaneous memory-reactivation processes.
  4. Background: How the brain develops accurate models of the external world and generates appropriate behavioral responses is a vital question of widespread multidisciplinary interest. It is increasingly understood that brain signal variability, posited to enhance perception, facilitate flexible cognitive representations, and improve behavioral outcomes, plays an important role in neural and cognitive development. The ability to perceive, interpret, and respond to complex and dynamic social information is particularly critical for the development of adaptive learning and behavior. Social perception relies on oxytocin-regulated neural networks that emerge early in development. Methods: We tested the hypothesis that individual differences in the endogenous oxytocinergic system early in life may influence social behavioral outcomes by regulating variability in brain signaling during social perception. In study 1, 55 infants provided a saliva sample at 5 months of age for analysis of individual differences in the oxytocinergic system and underwent electroencephalography (EEG) while listening to human vocalizations at 8 months of age for the assessment of brain signal variability. Infant behavior was assessed via parental report. In study 2, 60 infants provided a saliva sample and underwent EEG while viewing faces and objects and listening to human speech and water sounds at 4 months of age. Infant behavior was assessed via parental report and eye tracking. Results: We show in two independent infant samples that increased brain signal entropy during social perception is in part explained by an epigenetic modification to the oxytocin receptor gene (OXTR) and accounts for significant individual differences in social behavior in the first year of life. These results are measure-, context-, and modality-specific: entropy, not standard deviation, links OXTR methylation and infant behavior; entropy evoked during social perception explains social behavior only; and only entropy evoked during social auditory perception predicts infant vocalization behavior. Conclusions: Demonstrating these associations in infancy is critical for elucidating the neurobiological mechanisms accounting for individual differences in cognition and behavior relevant to neurodevelopmental disorders. Our results suggest that an epigenetic modification to the oxytocin receptor gene and brain signal entropy are useful indicators of social development and may hold potential diagnostic, therapeutic, and prognostic value. (A minimal signal-entropy computation is sketched after this list.)
  5. This paper presents a deep reinforcement learning algorithm for online accompaniment generation, with potential for real-time interactive human-machine duet improvisation. Unlike offline music generation and harmonization, online music accompaniment requires the algorithm to respond to human input and generate the machine counterpart sequentially. We cast this as a reinforcement learning problem, where the generation agent learns a policy to generate a musical note (action) based on the previously generated context (state). The key to this algorithm is a well-functioning reward model. Instead of defining it using music composition rules, we learn this model from monophonic and polyphonic training data. The model considers the compatibility of the machine-generated note with both the machine-generated context and the human-generated context. Experiments show that the algorithm is able to respond to the human part and generate a melodic, harmonic, and diverse machine part. Subjective evaluations of preferences show that the proposed algorithm generates music pieces of higher quality than the baseline method. (A toy tabular analogue of this setup is sketched after this list.)
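For item 1 above, a minimal sketch of a silence-locked analysis: epoch the EEG at silence onsets and compare the mean listening and imagery responses, where a negative correlation between the two would be consistent with the reported polarity inversion. The signals and onset times below are synthetic placeholders; a real analysis would use annotated scores and preprocessed multichannel EEG.

```python
import numpy as np

def epoch(eeg, onsets, win=100):
    """Average EEG segments time-locked to event onsets (in samples)."""
    segs = [eeg[t:t + win] for t in onsets if t + win <= len(eeg)]
    return np.mean(segs, axis=0)

# Placeholder data standing in for real listening and imagery recordings
rng = np.random.default_rng(1)
eeg_listen = rng.standard_normal(10000)
eeg_imagine = rng.standard_normal(10000)
silence_onsets = np.arange(200, 9500, 300)  # would come from the score

erp_listen = epoch(eeg_listen, silence_onsets)
erp_imagine = epoch(eeg_imagine, silence_onsets)

# A negative correlation between the two mean responses would be
# consistent with the polarity inversion described in the abstract.
r = np.corrcoef(erp_listen, erp_imagine)[0, 1]
print(f"listening vs imagery silence response: r = {r:.2f}")
```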
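For item 2, a toy first-order Markov (bigram) model of melodic structure: transition counts over pitches yield a per-note surprisal, -log2 P(note | previous note), of the kind used as a melodic-expectation regressor. The tiny corpus, add-one smoothing, and 128-pitch alphabet are illustrative assumptions; the model in the paper is trained on a large corpus and also covers note-onset timing.

```python
import numpy as np
from collections import Counter

def train_bigram(melodies):
    """Count pitch-to-pitch transitions across a corpus of melodies
    (each melody is a sequence of MIDI pitch numbers)."""
    counts = Counter()
    for mel in melodies:
        for a, b in zip(mel[:-1], mel[1:]):
            counts[(a, b)] += 1
    return counts

def surprisal(counts, melody, smoothing=1.0, n_pitches=128):
    """Per-note surprisal, -log2 P(note | previous note), with
    add-one style smoothing over the pitch alphabet."""
    out = []
    for a, b in zip(melody[:-1], melody[1:]):
        total = sum(v for (x, _), v in counts.items() if x == a)
        p = (counts[(a, b)] + smoothing) / (total + smoothing * n_pitches)
        out.append(-np.log2(p))
    return np.array(out)

# Tiny illustrative corpus of MIDI pitch sequences
corpus = [[60, 62, 64, 65, 67], [60, 64, 67, 72], [60, 62, 60, 59, 60]]
model = train_bigram(corpus)
print(surprisal(model, [60, 62, 64, 67]))  # higher value = less expected
```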
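For item 4, brain signal entropy can be quantified in several ways, and the abstract does not specify the exact estimator; the sketch below uses sample entropy, one common choice for EEG variability, purely as an illustration.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy of a 1-D signal: -log of the conditional
    probability that template sequences matching for m points (within
    tolerance r) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)

    def count_matches(m):
        # Embed the signal into overlapping m-length templates
        templates = np.lib.stride_tricks.sliding_window_view(x, m)
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(2)
print(sample_entropy(rng.standard_normal(500)))          # irregular: higher
print(sample_entropy(np.sin(np.linspace(0, 20, 500))))   # regular: lower
```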
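For item 5, a toy tabular analogue of the accompaniment setup: the state is the current human note paired with the last machine note, the action is the next machine note, and Q-learning improves the policy. The hand-coded consonance reward below is a stand-in assumption; the paper learns its reward model from data and uses deep function approximation rather than a table.

```python
import numpy as np

# Toy setup: the state is (current human note, last machine note) over a
# one-octave pitch set, and the action is the next machine note.
PITCHES = list(range(60, 72))
rng = np.random.default_rng(3)
Q = {}  # state -> array of action values

def reward(human, machine):
    """Stand-in compatibility score (the paper *learns* this from data):
    favor consonant intervals between the human and machine notes."""
    consonant = {0, 3, 4, 5, 7, 8, 9, 12}
    return 1.0 if abs(human - machine) % 12 in consonant else -1.0

def choose(state, eps=0.1):
    """Epsilon-greedy action selection over the tabular Q-values."""
    q = Q.setdefault(state, np.zeros(len(PITCHES)))
    return int(rng.integers(len(PITCHES))) if rng.random() < eps else int(np.argmax(q))

# Q-learning against random human melodies
alpha, gamma = 0.1, 0.9
for _ in range(20000):
    human = [int(p) for p in rng.choice(PITCHES, size=8)]
    machine = int(rng.choice(PITCHES))
    for t in range(len(human) - 1):
        state = (human[t], machine)
        a = choose(state)
        machine = PITCHES[a]
        r = reward(human[t], machine)
        nxt = Q.setdefault((human[t + 1], machine), np.zeros(len(PITCHES)))
        Q[state][a] += alpha * (r + gamma * np.max(nxt) - Q[state][a])

# Generate a machine part in response to a fixed human line (greedy policy)
line = [60, 64, 67, 65, 64, 62, 60]
out, machine = [], 60
for h in line:
    machine = PITCHES[choose((h, machine), eps=0.0)]
    out.append(machine)
print(out)
```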