Title: BachDuet: A Deep Learning System for Human-Machine Counterpoint Improvisation
During the Baroque period, improvisation was a key element of music performance and education. Great musicians, such as J.S. Bach, were better known as improvisers than composers. Today, however, there is a lack of improvisation culture in classical music performance and education; classical musicians either are not trained to improvise, or cannot find other people to improvise with. Motivated by this observation, we develop BachDuet, a system that enables real-time counterpoint improvisation between a human and a machine. This system uses a recurrent neural network to process the human musician’s monophonic performance on a MIDI keyboard and generates the machine’s monophonic performance in real time. We develop a GUI to visualize the generated music content and to facilitate this interaction. We conduct user studies with 13 musically trained users and show the feasibility of two-party duet counterpoint improvisation and the effectiveness of BachDuet for this purpose. We also conduct listening tests with 48 participants and show that they cannot tell the difference between duets generated by human-machine improvisation using BachDuet and those generated by human-human improvisation. Objective evaluation is also conducted to assess the degree to which these improvisations adhere to common rules of counterpoint, showing promising results.
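As a rough illustration of the interaction loop described in the abstract, the sketch below shows how a recurrent network could consume the human's latest MIDI token together with the machine's previous token and emit the machine's next note one step at a time. This is not the authors' implementation: the token vocabulary, GRU architecture, layer sizes, and temperature sampling are all illustrative assumptions.

```python
# Minimal sketch of a real-time duet step (not the authors' implementation).
# Assumptions: a hypothetical token set of 128 MIDI pitches plus rest and hold
# symbols, a single-layer GRU, and temperature sampling.
import torch
import torch.nn as nn

VOCAB = 130  # 128 MIDI pitches + rest + hold (illustrative)

class DuetRNN(nn.Module):
    def __init__(self, vocab=VOCAB, emb=64, hidden=256):
        super().__init__()
        self.human_emb = nn.Embedding(vocab, emb)
        self.machine_emb = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(2 * emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def step(self, human_tok, machine_tok, state=None):
        # One time step: embed both voices, update the recurrent state,
        # and return logits over the machine's next token.
        x = torch.cat([self.human_emb(human_tok),
                       self.machine_emb(machine_tok)], dim=-1)
        out, state = self.rnn(x.unsqueeze(1), state)
        return self.head(out[:, -1]), state

@torch.no_grad()
def generate_next(model, human_tok, prev_machine_tok, state, temperature=1.0):
    # Sample the machine's next note given the human's current note.
    logits, state = model.step(human_tok, prev_machine_tok, state)
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, 1).squeeze(-1), state
```

In a live setting, a step function like this could be called once per metrical subdivision, with the recurrent state carried across calls so the machine part stays coherent over time.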
Award ID(s):
1922591
PAR ID:
10191375
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the International Conference on New Interfaces for Musical Expression
ISSN:
2220-4806
Page Range / eLocation ID:
635 - 640
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper presents a deep reinforcement learning algorithm for online accompaniment generation, with potential for real-time interactive human-machine duet improvisation. Unlike offline music generation and harmonization, online accompaniment requires the algorithm to respond to human input and generate the machine counterpart sequentially. We cast this as a reinforcement learning problem, where the generation agent learns a policy to generate a musical note (action) based on the previously generated context (state). The key to this algorithm is a well-functioning reward model. Instead of defining it with music composition rules, we learn this model from monophonic and polyphonic training data. The model considers the compatibility of the machine-generated note with both the machine-generated context and the human-generated context. Experiments show that the algorithm is able to respond to the human part and generate a melodic, harmonic, and diverse machine part. Subjective preference evaluations show that the proposed algorithm generates music of higher quality than the baseline method.
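As a minimal sketch of the formulation summarized above (our own illustration, not the paper's code), the snippet below treats the recent machine- and human-generated notes as the state, a categorical policy over candidate notes as the action, and a separately learned reward network that scores a candidate note against both contexts; the context length, network sizes, and action set are assumptions.

```python
# Illustrative sketch of the online accompaniment step (assumptions, not the
# paper's code): state = recent machine/human notes, action = next note,
# reward = a learned score of how well the note fits both contexts.
import torch
import torch.nn as nn

N_NOTES = 129   # hypothetical action space: 128 MIDI pitches + rest
CTX_LEN = 8     # hypothetical context length in notes

class PolicyNet(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * CTX_LEN, hidden), nn.ReLU(),
            nn.Linear(hidden, N_NOTES),
        )

    def forward(self, machine_ctx, human_ctx):
        # State = concatenation of recent machine and human notes.
        state = torch.cat([machine_ctx, human_ctx], dim=-1).float()
        return self.net(state)  # logits over candidate notes

class RewardModel(nn.Module):
    # Stand-in for the reward model learned from monophonic/polyphonic data.
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * CTX_LEN + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, machine_ctx, human_ctx, note):
        x = torch.cat([machine_ctx, human_ctx, note.unsqueeze(-1)], dim=-1)
        return self.net(x.float()).squeeze(-1)

def accompany_step(policy, reward_model, machine_ctx, human_ctx):
    # Sample an action from the policy and score it with the learned reward
    # (the reward and log-probability would drive a policy-gradient update).
    logits = policy(machine_ctx, human_ctx)
    dist = torch.distributions.Categorical(logits=logits)
    note = dist.sample()
    reward = reward_model(machine_ctx, human_ctx, note)
    return note, dist.log_prob(note), reward
```

During training, the sampled log-probability and the learned reward could feed a standard policy-gradient update; at run time only the policy step is needed to respond to the human part.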
  2. Despite our intimate relationship with music in everyday life, we know little about how people create music. A particularly elusive area of study is spontaneous collaborative musical creation in the absence of rehearsals or scripts. Toward this aim, we designed an experiment in which pairs of players collaboratively created music through rhythmic improvisation. Rhythmic patterns and collaborative processes were investigated through symbolic recurrence quantification and information theory applied to the time series of the sound created by the players. Working with real data on collaborative rhythmic improvisation, we identified features of improvised music and elucidated the underlying processes of collaboration. Players preferred certain patterns over others, and their musical experience drove the collaboration when rhythmic improvisation began. These results reveal prevailing rhythmic features of collaborative music creation and shed light on the complex dynamics of the underlying processes.
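To make the analysis pipeline concrete, here is a minimal sketch of symbolic recurrence applied to a rhythmic time series. The quantile binning, alphabet size, and recurrence-rate measure are simplified stand-ins chosen for illustration, not details taken from the study.

```python
# Minimal sketch of symbolic recurrence on a rhythm (illustrative stand-in,
# not the study's code): quantize inter-onset intervals into a small symbol
# alphabet and measure how often symbols recur across the performance.
import numpy as np

def symbolize(iois, n_bins=4):
    # Map inter-onset intervals (seconds) to discrete symbols via quantile bins.
    edges = np.quantile(iois, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(iois, edges)

def recurrence_rate(symbols):
    # Fraction of distinct time-point pairs (i, j) sharing the same symbol.
    s = np.asarray(symbols)
    matches = s[:, None] == s[None, :]
    n = len(s)
    return (matches.sum() - n) / (n * (n - 1))

# Example: note onsets (seconds) from one player, reduced to a symbol sequence.
onsets = np.array([0.0, 0.5, 1.0, 1.6, 2.0, 2.5, 3.1, 3.5])
print(recurrence_rate(symbolize(np.diff(onsets))))
```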
  3. Learner-centered interactions, whether in formal or informal settings, are by their nature unscripted and require both the educator and learner to improvise. In fact, improvisation skills have been recognized as beneficial and applied in a variety of professional development training programs (including science communication, organizational development in university administration, teambuilding and leadership in business, and communication skills in medical education); yet, their inclusion in educator training has been limited. MOXI and UCSB partnered with a professional actor and theater instructor (third author of this paper) to implement applied improvisation training to support informal educators' skills development. After four years of incorporating applied improvisation training in our facilitation training program, we have found that the basic skills of listening, observing, and responding that are critical in learner-centered education are taught effectively through the well-developed, practical, and fun exercises of improvisational theater. In this article, we describe our applied improvisation training and how it builds skills pertinent to implementing learner-centered facilitation, how graduates of our training program connected applied improvisation training to their facilitation, and how other institutions can incorporate it into preparing educators for working in either informal or formal settings. 
  4. Generative AI in music (GAIM) technologies are rapidly transforming music production, yet little is known about how working musicians perceive and respond to these changes. This study presents findings from in-depth interviews with 43 musicians, spanning diverse genres, professional roles and experience with music technology. Our analysis, informed by a reflexive thematic analysis approach, suggests complex tensions between perceived benefits and risks of GAIM adoption. Key themes were generated around tensions between (i) fear of reduced job opportunities for professional musicians and appreciation of the potential of AI to make individual musicians more independent and productive; (ii) fear about the exploitation of artists’ work and benefits of open music exchanges; (iii) fear that AI will exacerbate inequities and recognition of AI’s potential to increase access to music production. Our findings highlight the need for careful consideration of justice and fairness in GAIM development and deployment, suggesting that different types of GAIM use (from assistant to replacement) carry distinct ethical implications. This work provides a foundation for understanding how GAIMs can be integrated into music production while respecting artists’ rights and creative agency. 
  5. This methods paper presents the interview quality reflection tool (IQRT) to evaluate the quality of qualitative research interviews. Qualitative researchers commonly use semi-structured interviews that rely on the interviewers’ ability to improvise in real time based on the needs of the study. Given that interviewing involves numerous tacit skills that cannot be delineated by a simple written protocol, it is necessary that researchers develop interview competencies through practice and reflection. While prior literature on interviewing has often focused on developing interview protocols, we know little about how interviewers themselves may be trained to gather high-quality data. In this paper, we focus on how the IQRT may be used to guide the self-assessment of research interviews. We discuss how interviews are used in engineering education, how we developed and applied the IQRT, and how lessons learned through using this tool might lead to improved interviewing skills through careful examination of interview structure, content, and context within the mentoring process. 