The steelpan is a pitched percussion instrument that takes the form of a concave bowl with several localized dimpled regions of varying curvature. Each of these localized zones, called notes, can vibrate independently when struck, and produce a sustained tone of a well-defined pitch. While the association of the localized zones with individual notes has long been known and exploited, the relationship between the shell geometry and the strength of the mode confinement remains unclear. Here, we explore the spectral properties of the steelpan modelled as a vibrating elastic shell. To characterize the resulting eigenvalue problem, we generalize a recently developed theory of localization landscapes for scalar elliptic operators to the vector-valued case, and predict the location of confined eigenmodes by solving a Poisson problem. A finite-element discretization of the shell shows that the localization strength is determined by the difference in curvature between the note and the surrounding bowl. In addition to providing an explanation for how a steelpan operates as a two-dimensional xylophone, our study provides a geometric principle for designing localized modes in elastic shells.
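The landscape construction behind this prediction lends itself to a compact numerical illustration. The sketch below is a hedged toy under stated assumptions, not the paper's vector-valued shell computation: it uses a 1D Schrödinger-type operator L = -d^2/dx^2 + V(x), with a made-up potential V standing in for the curvature contrast between a "note" and the surrounding bowl, solves the Poisson-type landscape equation Lu = 1 with Dirichlet boundary conditions, and checks that the maxima of u mark where the lowest eigenmode concentrates.

```python
# Minimal sketch (assumptions: a 1D scalar operator and an invented
# potential) of the localization-landscape idea: solve L u = 1 with
# Dirichlet boundary conditions; the maxima of u predict where the
# low-frequency eigenmodes of L concentrate.
import numpy as np

n = 400                        # interior grid points on the unit interval
h = 1.0 / (n + 1)              # grid spacing
x = np.linspace(h, 1.0 - h, n)

# Hypothetical potential: two low-stiffness "note" regions in a stiff bowl.
V = np.full(n, 4000.0)
V[(x > 0.15) & (x < 0.30)] = 0.0      # wider note
V[(x > 0.60) & (x < 0.70)] = 0.0      # narrower note

# Finite-difference matrix for L = -d^2/dx^2 + V with Dirichlet boundaries.
off = -np.ones(n - 1) / h**2
A = np.diag(2.0 / h**2 + V) + np.diag(off, 1) + np.diag(off, -1)

# Landscape: the solution of the Poisson-type problem L u = 1.
u = np.linalg.solve(A, np.ones(n))

# Compare against the true lowest eigenmode of L.
w, v = np.linalg.eigh(A)
print("landscape maximum at x =", x[np.argmax(u)])
print("lowest eigenmode peaks at x =", x[np.argmax(np.abs(v[:, 0]))])
print("1/max(u) =", 1.0 / u.max(), " vs lowest eigenvalue =", w[0])
```

In this toy the printed landscape maximum and the lowest eigenmode peak both fall inside the wider well, and 1/max(u) gives the right scale for the lowest eigenvalue, illustrating how a single linear solve locates the confined mode without computing the spectrum.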
Musical adaptation as phonological evidence: Case studies from textsetting, rhyme, and musical surrogates
- Award ID(s): 1664335
- PAR ID: 10166941
- Date Published:
- Journal Name: Language and Linguistics Compass
- Volume: 13
- Issue: 12
- ISSN: 1749-818X
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Despite our intimate relationship with music in everyday life, we know little about how people create music. A particularly elusive area of study is spontaneous collaborative musical creation in the absence of rehearsals or scripts. To this end, we designed an experiment in which pairs of players collaboratively created music through rhythmic improvisation. Rhythmic patterns and collaborative processes were investigated through symbolic-recurrence quantification and information theory, applied to the time series of the sound created by the players. Working with real data on collaborative rhythmic improvisation, we identified features of improvised music and elucidated underlying processes of collaboration. Players preferred certain patterns over others, and their musical experience drove collaboration when rhythmic improvisation started. These results reveal prevailing rhythmic features of collaborative music creation and shed light on the complex dynamics of the underlying processes.
-
The dominant research strategy within the field of music perception and cognition has typically involved new data collection and primary analysis techniques. As a result, numerous information-rich yet underexplored datasets exist in publicly accessible online repositories. In this paper we contribute two secondary analysis methodologies to overcome two common challenges in working with previously collected data: lack of participant stimulus ratings and lack of physiological baseline recordings. Specifically, we focus on methodologies that unlock previously unexplored musical preference questions. Preferred music plays important roles in our personal, social, and emotional well-being, and is capable of inducing emotions that result in psychophysiological responses. Therefore, we select the Study Forrest dataset "auditory perception" extension as a case study, which provides physiological and self-report demographics data for participants (N = 20) listening to clips from different musical genres. In Method 1, we quantitatively model self-report genre preferences using the MUSIC five-factor model: a tool recognized for genre-free characterization of musical preferences. In Method 2, we calculate synthetic baselines for each participant, allowing us to compare physiological responses (pulse and respiration) across individuals. With these methods, we uncover average changes in breathing rate as high as 4.8%, which correlate with musical genres in this dataset (p < .001). High-level musical characteristics from the MUSIC model (mellowness and intensity) further reveal a linear breathing rate trend among genres (p < .001). Although no causation can be inferred given the nature of the analysis, the significant results obtained demonstrate the potential for previous datasets to be more productively harnessed for novel research.
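The synthetic-baseline idea in Method 2 of this last record is simple enough to sketch. The snippet below is a hypothetical illustration with fabricated numbers, not the authors' analysis of the Study Forrest data: the participant count, clip count, genre labels, and breathing rates are all invented. It treats each participant's mean breathing rate across all clips as a stand-in baseline and expresses every clip as a percent change from that baseline, which is what makes responses comparable across individuals.

```python
# Hypothetical sketch of a synthetic physiological baseline (not the
# authors' procedure): with no resting-state recording available, use
# each participant's mean breathing rate over all clips as a baseline
# and express every clip as a percent change from it.
import numpy as np

rng = np.random.default_rng(1)
n_participants, n_clips, n_genres = 20, 25, 5      # invented sizes
genres = np.arange(n_clips) % n_genres              # invented genre labels

# Invented breathing rates (breaths/min): a per-person level plus noise.
rates = rng.normal(15.0, 2.0, size=(n_participants, 1)) \
        + rng.normal(0.0, 0.8, size=(n_participants, n_clips))

# Synthetic baseline: each participant's mean rate across all clips.
baseline = rates.mean(axis=1, keepdims=True)

# Percent change from the synthetic baseline, comparable across people.
pct_change = 100.0 * (rates - baseline) / baseline

# Average percent change per genre, pooled over participants.
for g in range(n_genres):
    mean_change = pct_change[:, genres == g].mean()
    print(f"genre {g}: mean breathing-rate change {mean_change:+.2f}%")
```

A real analysis would of course add the MUSIC-factor modelling and significance testing described above; the sketch only isolates the baseline-normalization step.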