

Title: Test-Time Adaptation Toward Personalized Speech Enhancement: Zero-Shot Learning with Knowledge Distillation
In realistic speech enhancement settings for end-user devices, we often encounter only a few speakers and noise types that tend to recur in the specific acoustic environment. We propose a novel personalized speech enhancement method to adapt a compact denoising model to this test-time specificity. Our goal in this test-time adaptation is to use no clean speech target from the test speaker, thus fulfilling the requirement for zero-shot learning. To compensate for the lack of clean speech, we employ the knowledge distillation framework: we distill the more advanced denoising results from an overly large teacher model and use them as the pseudo target to train the small student model. This zero-shot learning procedure circumvents the collection of users' clean speech, a process that users are reluctant to comply with due to privacy concerns and the technical difficulty of recording clean voice. Experiments on various test-time conditions show that the proposed personalization method significantly improves the compact models' performance at test time. Furthermore, since the personalized models outperform larger non-personalized baseline models, we claim that personalization achieves model compression with no loss of denoising performance. As expected, the student models still underperform the state-of-the-art teacher models.
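The distillation loop described above can be made concrete with a short sketch. This is a minimal illustration only, assuming PyTorch and hypothetical objects (`teacher`, `student`, `noisy_loader`) that stand in for the paper's models and the user's noisy recordings; the MSE distillation loss is likewise an assumed placeholder, not necessarily the authors' objective.

```python
# Minimal sketch of zero-shot personalization via knowledge distillation:
# the frozen teacher's enhanced output serves as the pseudo target for the
# compact student, so no clean speech from the user is ever required.
import torch

def personalize_student(teacher, student, noisy_loader, steps=1000, lr=1e-4):
    teacher.eval()                          # large teacher stays frozen
    student.train()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()            # placeholder distillation loss

    batches = iter(noisy_loader)
    for _ in range(steps):
        try:
            noisy = next(batches)           # test-time user's noisy speech only
        except StopIteration:
            batches = iter(noisy_loader)
            noisy = next(batches)

        with torch.no_grad():
            pseudo_target = teacher(noisy)  # teacher's denoised estimate = pseudo label

        loss = loss_fn(student(noisy), pseudo_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student
```

In practice the distillation loss could just as well be a scale-invariant SDR-style objective; the abstract does not specify it, so the choice above is illustrative.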
Award ID(s):
2046963
NSF-PAR ID:
10318278
Author(s) / Creator(s):
Date Published:
Journal Name:
2021 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Training personalized speech enhancement models is innately a no-shot learning problem due to privacy constraints and limited access to noise-free speech from the target user. If there is an abundance of unlabeled noisy speech from the test-time user, one may train a personalized speech enhancement model using self-supervised learning. One straightforward approach to model personalization is to use the target speaker's noisy recordings as pseudo-sources. Then, a pseudo denoising model learns to remove injected training noises and recover the pseudo-sources. However, this approach is volatile as it depends on the quality of the pseudo-sources, which may be too noisy. To remedy this, we propose a data purification step that refines the self-supervised approach. We first train an SNR predictor model to estimate the frame-by-frame SNR of the pseudo-sources. Then, we convert the predictor's estimates into weights that adjust the pseudo-sources' frame-by-frame contribution towards training the personalized model. We empirically show that the proposed data purification step improves the usability of the speaker-specific noisy data in the context of personalized speech enhancement. Our approach may be seen as privacy-preserving as it does not rely on any clean speech recordings or speaker embeddings. (A rough code sketch of this frame-weighting step appears after this list.)
  2. This paper presents a novel zero-shot learning approach towards personalized speech enhancement through the use of a sparsely active ensemble model. Optimizing speech denoising systems towards a particular test-time speaker can improve performance and reduce run-time complexity. However, test-time model adaptation may be challenging if collecting data from the test-time speaker is not possible. To this end, we propose using an ensemble model wherein each specialist module denoises noisy utterances from a distinct partition of training set speakers. The gating module inexpensively estimates test-time speaker characteristics in the form of an embedding vector and selects the most appropriate specialist module for denoising the test signal. Grouping the training set speakers into non-overlapping, semantically similar groups is non-trivial and ill-defined. To do this, we first train a Siamese network using noisy speech pairs to maximize or minimize the similarity of its output vectors depending on whether the utterances derive from the same speaker or not. Next, we perform k-means clustering on the latent space formed by the averaged embedding vectors per training set speaker. In this way, we designate speaker groups and train specialist modules optimized around partitions of the complete training set. Our experiments show that ensemble models made up of low-capacity specialists can outperform high-capacity generalist models with greater efficiency and improved adaptation towards unseen test-time speakers. (A rough code sketch of this specialist-selection step appears after this list.)
  3. Across a wide variety of domains, artificial agents that can adapt and personalize to users have the potential to improve and transform how social services are provided. Because of the need for personalized interaction data to drive this process, long-term (or longitudinal) interactions between users and agents, which unfold over a series of distinct interaction sessions, have attracted substantial research interest. In recognition of the expanded scope and structure of a long-term interaction, researchers are also adjusting the personalization models and algorithms used, orienting toward “continual learning” methods, which do not assume a stationary modeling target and explicitly account for the temporal context of training data. In parallel, researchers have also studied the effect of “multitask personalization,” an approach in which an agent interacts with users over multiple different task contexts throughout the course of a long-term interaction and learns personalized models of a user that are transferrable across these tasks. In this paper, we unite these two paradigms under the framework of “Lifelong Personalization,” analyzing the effect of multitask personalization applied to dynamic, non-stationary targets. We extend the multitask personalization approach to the more complex and realistic scenario of modeling dynamic learners over time, focusing in particular on interactive scenarios in which the modeling agent plays an active role in teaching the student whose knowledge the agent is simultaneously attempting to model. Inspired by the way in which agents use active learning to select new training data based on domain context, we augment a Gaussian Process-based multitask personalization model with a mechanism to actively and continually manage its own training data, allowing a modeling agent to remove or reduce the weight of observed data from its training set based on interactive context cues. We evaluate this method in a series of simulation experiments comparing different approaches to continual and multitask learning on simulated student data. We expect this method to substantially improve learning in Gaussian Process models in dynamic domains, establishing Gaussian Processes as another flexible modeling tool for long-term Human-Robot Interaction (HRI) studies. (A rough code sketch of this data-management mechanism appears after this list.)
  4. To help facilitate play and learning, game-based educational activities often feature a computational agent as a co-player. Personalizing this agent's behavior to the student player is an active area of research, and prior work has demonstrated the benefits of personalized educational interaction across a variety of domains. A critical research challenge for personalized educational agents is real-time student modeling. Most student models are designed for and trained on only a single task, which limits the variety, flexibility, and efficiency of student player model learning. In this paper we present a research project applying transfer learning methods to student player models over different educational tasks, studying the effects of an algorithmic "multi-task personalization" approach on the accuracy and data efficiency of student model learning. We describe a unified robotic game system for studying multi-task personalization over two different educational games, each emphasizing early language and literacy skills such as rhyming and spelling. We present a flexible Gaussian Process-based approach for rapidly learning student models from interactive play in each game, and a method for transferring each game's learned student model to the other via a novel instance-weighting protocol based on task similarity. We present results from a simulation-based investigation of the impact of multi-task personalization, establishing the core viability and benefits of transferrable student models and outlining new questions for future in-person research. (A rough code sketch of this instance-weighting transfer appears after this list.)
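To illustrate item 1 above (self-supervised personalization with data purification), here is a rough sketch of the frame-weighting idea. It assumes PyTorch and hypothetical components (`student`, `snr_predictor`) with assumed tensor shapes; the sigmoid mapping from estimated SNR to frame weight is an assumption, not necessarily the authors' exact formulation.

```python
# Rough sketch: down-weight the frames of the pseudo-source that the SNR
# predictor deems too noisy, so they contribute less to the training loss.
import torch

def purified_loss(student, snr_predictor, pseudo_source, injected_noise):
    # pseudo_source, injected_noise: [batch, frames, bins] (assumed layout)
    noisy_input = pseudo_source + injected_noise
    estimate = student(noisy_input)

    with torch.no_grad():
        snr_db = snr_predictor(pseudo_source)        # [batch, frames] frame-wise SNR estimate
        weights = torch.sigmoid(snr_db / 10.0)       # cleaner frames -> weight near 1

    frame_err = ((estimate - pseudo_source) ** 2).mean(dim=-1)   # [batch, frames]
    return (weights * frame_err).mean()              # weighted reconstruction loss
```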
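To illustrate item 2 above (zero-shot specialist selection), the gating step might look like the following. `gate`, `centroids`, and `specialists` are hypothetical stand-ins for the Siamese-trained embedding network, the k-means cluster centers, and the per-cluster denoisers; using cosine similarity for the selection is an assumption.

```python
# Rough sketch: embed the noisy test utterance, pick the closest speaker
# cluster, and run only that cluster's specialist denoiser.
import torch
import torch.nn.functional as F

def denoise_with_ensemble(noisy, gate, centroids, specialists):
    with torch.no_grad():
        emb = gate(noisy)                                        # [dim] speaker embedding
        sims = F.cosine_similarity(emb.unsqueeze(0), centroids)  # [num_clusters]
        k = int(sims.argmax())                                   # most similar cluster
    return specialists[k](noisy)                                 # only one specialist runs
```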
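To illustrate item 3 above (a Gaussian Process student model that manages its own training data), here is one way such a mechanism could be realized. Everything here is an assumption for illustration: the feature encoding, the "skill was just taught" cue, and the use of scikit-learn's per-sample `alpha` (observation noise) to down-weight stale points.

```python
# Rough sketch: a Gaussian Process student model that can inflate the noise
# term of (i.e., down-weight) older observations when a context cue says
# they are stale, instead of treating all past data as equally reliable.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

class ManagedGPStudentModel:
    def __init__(self, base_noise=0.05, stale_noise=1.0):
        self.X, self.y, self.alpha = [], [], []
        self.base_noise, self.stale_noise = base_noise, stale_noise

    def observe(self, features, outcome):
        self.X.append(features)
        self.y.append(outcome)
        self.alpha.append(self.base_noise)

    def mark_stale(self, skill_index, skill_value):
        # Context cue (e.g., the agent just taught this skill): older
        # observations of it should count less in the next fit.
        for i, x in enumerate(self.X):
            if x[skill_index] == skill_value:
                self.alpha[i] = self.stale_noise

    def fit(self):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                      alpha=np.array(self.alpha))
        gp.fit(np.array(self.X), np.array(self.y))
        return gp
```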
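Finally, to illustrate item 4 above (transferring a student model between two games via instance weighting), one simple realization is to pool both games' observations and give the transferred instances a larger GP noise term as task similarity decreases. The data names and the similarity-to-noise mapping are assumptions; the paper's instance-weighting protocol may differ.

```python
# Rough sketch: fit a spelling-game student model that also uses rhyming-game
# data, with the transferred instances down-weighted by task similarity.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_transferred_model(X_spell, y_spell, X_rhyme, y_rhyme,
                          task_similarity=0.7, base_noise=0.05):
    X = np.vstack([X_spell, X_rhyme])
    y = np.concatenate([y_spell, y_rhyme])
    # Native observations keep low noise; transferred ones get their noise
    # inflated as similarity drops, so they influence the fit less.
    alpha = np.concatenate([
        np.full(len(y_spell), base_noise),
        np.full(len(y_rhyme), base_noise / task_similarity),
    ])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=alpha)
    gp.fit(X, y)
    return gp
```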