Title: Revisiting Data-Free Knowledge Distillation with Poisoned Teachers
Data-free knowledge distillation (KD) transfers knowledge from a pre-trained model (the teacher) to a smaller model (the student) without access to the original data used to train the teacher. However, the security of the synthetic or out-of-distribution (OOD) data required by data-free KD is largely unknown and under-explored. In this work, we make the first effort to uncover the security risk that untrusted pre-trained teachers pose to data-free KD. We then propose Anti-Backdoor Data-Free KD (ABD), the first plug-in defense for data-free KD methods that mitigates the chance of potential backdoors being transferred to the student. We empirically show that ABD diminishes transferred backdoor knowledge while maintaining downstream performance comparable to vanilla data-free KD. We envision this work as a milestone in raising awareness of, and mitigating, potential backdoors in data-free KD. Code is released at https://github.com/illidanlab/ABD .
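The step the abstract refers to, and the one that makes a poisoned teacher dangerous, is distillation over synthetic data: the student is trained to match the teacher's soft predictions on generator-produced inputs, so any backdoor the teacher encodes can be imitated along with the useful knowledge. The following is a minimal, generic sketch of one such data-free KD step, for illustration only; the function names, generator interface, and hyperparameters are assumptions, and the ABD defense itself is not shown.

    # Minimal sketch of one data-free KD step (illustrative; not the ABD implementation).
    # Synthetic inputs stand in for the unavailable training data, and the student is
    # trained to match the (possibly poisoned) teacher's temperature-softened outputs.
    import torch
    import torch.nn.functional as F

    def distill_step(teacher, student, generator, optimizer, batch_size=64, z_dim=100, T=4.0):
        teacher.eval()
        z = torch.randn(batch_size, z_dim)
        x_syn = generator(z)                      # synthetic inputs produced by a generator
        with torch.no_grad():
            t_logits = teacher(x_syn)
        s_logits = student(x_syn)
        # Standard KD objective: KL divergence between softened output distributions.
        loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                        F.softmax(t_logits / T, dim=1),
                        reduction="batchmean") * (T * T)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

If the teacher carries a backdoor, nothing in this objective distinguishes trigger-related behavior from benign knowledge, which is the transfer risk the abstract identifies and ABD is designed to suppress.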
Award ID(s): 2212174, 1749940
PAR ID: 10460848
Journal Name: Proceedings of the 40th International Conference on Machine Learning
Volume: 202
Page Range / eLocation ID: 13199-13212
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    Knowledge Distillation (KD) is a widely used technique to transfer knowledge from pre-trained teacher models to (usually more lightweight) student models. However, in certain situations, this technique is more of a curse than a blessing. For instance, KD poses a potential risk of exposing intellectual property (IP): even if a trained machine learning model is released as a "black box" (e.g., as executable software or an API without open-sourcing the code), it can still be replicated by KD through imitating its input-output behavior. To prevent this unwanted effect of KD, this paper introduces and investigates a concept called the Nasty Teacher: a specially trained teacher network that yields nearly the same performance as a normal one, but significantly degrades the performance of student models learned by imitating it. We propose a simple yet effective algorithm to build such a nasty teacher. Specifically, we aim to maximize the difference between the output of the nasty teacher and that of a normal pre-trained network. Extensive experiments on several datasets demonstrate that our method is effective against both standard KD and data-free KD, providing the desired KD-immunity to model owners for the first time. We hope our preliminary study can draw more awareness and interest in this new practical problem of both social and legal importance. Our code and pre-trained models can be found at https://github.com/VITA-Group/Nasty-Teacher .
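    A rough sketch of the kind of objective described above for building a nasty teacher: keep the network accurate on the true labels while pushing its output distribution away from that of a normal pre-trained network. The loss form, temperature, and trade-off weight below are illustrative assumptions, not the authors' exact formulation.

        # Illustrative "nasty teacher" style objective (assumed form, not the paper's exact loss):
        # stay accurate on the labels while maximizing divergence from a normal pre-trained network.
        import torch.nn.functional as F

        def nasty_teacher_loss(nasty_logits, normal_logits, labels, T=4.0, omega=0.01):
            ce = F.cross_entropy(nasty_logits, labels)              # preserve normal accuracy
            kl = F.kl_div(F.log_softmax(nasty_logits / T, dim=1),
                          F.softmax(normal_logits.detach() / T, dim=1),
                          reduction="batchmean") * (T * T)
            return ce - omega * kl                                  # subtracting the KL term pushes the outputs apart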
  2. Due to its decentralized nature, Federated Learning (FL) lends itself to adversarial attacks in the form of backdoors during training. The goal of a backdoor is to corrupt the performance of the trained model on specific sub-tasks (e.g., by classifying green cars as frogs). A range of FL backdoor attacks have been introduced in the literature, along with methods to defend against them, and it is currently an open question whether FL systems can be tailored to be robust against backdoors. In this work, we provide evidence to the contrary. We first establish that, in the general case, robustness to backdoors implies model robustness to adversarial examples, a major open problem in itself. Furthermore, detecting the presence of a backdoor in an FL model is unlikely to be feasible assuming first-order oracles or polynomial time. We couple our theoretical results with a new family of backdoor attacks, which we refer to as edge-case backdoors. An edge-case backdoor forces a model to misclassify seemingly easy inputs that are, however, unlikely to appear in the training or test data, i.e., they live on the tail of the input distribution. We explain how these edge-case backdoors can lead to unsavory failures and may have serious repercussions for fairness, and we show that, with careful tuning on the side of the adversary, one can insert them across a range of machine learning tasks (e.g., image classification, OCR, text prediction, sentiment analysis).
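    As a concrete illustration of the attack surface described in item 2, a malicious federated-learning client can mix a few tail-of-distribution ("edge-case") inputs, relabeled with the attacker's target class, into its local data before submitting its update. The sketch below is hypothetical and omits the careful adversarial tuning (e.g., update boosting or norm-clipping evasion) that the abstract alludes to.

        # Hypothetical malicious FL client update: edge-case inputs relabeled to the target class
        # are mixed into the local training data; the resulting update is returned as if benign.
        import torch
        import torch.nn.functional as F
        from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

        def malicious_local_update(model, clean_dataset, edge_case_inputs, target_label,
                                   epochs=2, lr=0.01):
            poisoned = TensorDataset(
                edge_case_inputs,
                torch.full((len(edge_case_inputs),), target_label, dtype=torch.long))
            loader = DataLoader(ConcatDataset([clean_dataset, poisoned]),
                                batch_size=32, shuffle=True)
            opt = torch.optim.SGD(model.parameters(), lr=lr)
            for _ in range(epochs):
                for x, y in loader:
                    opt.zero_grad()
                    F.cross_entropy(model(x), y).backward()
                    opt.step()
            return model.state_dict()   # submitted to the server like any benign update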
  3. Abstract. Objective. Deep-learning (DL)-based dose engines have been developed to alleviate the intrinsic compromise between the calculation accuracy and efficiency of traditional dose calculation algorithms. However, current DL-based engines typically possess high computational complexity and require powerful computing devices. Therefore, to mitigate their computational burden and broaden their applicability to clinical settings where only resource-limited devices are available, we propose a compact dose engine built via a knowledge distillation (KD) framework that offers ultra-fast calculation with high accuracy for prostate Volumetric Modulated Arc Therapy (VMAT). Approach. The KD framework contains two sub-models: a large pre-trained teacher and a small to-be-trained student. The student receives knowledge transferred from the teacher for better generalization, and the trained student serves as the final engine for dose calculation. The model input is the patient computed tomography (CT) and the VMAT dose in water, and the output is the DL-calculated patient dose. The ground-truth dose was computed by the Monte Carlo module of the Monaco treatment planning system. Twenty and ten prostate cases were included for model training and assessment, respectively. The models' performance (teacher/student/student-only) was evaluated by Gamma analysis and inference efficiency. Main results. The dosimetric comparisons (input/DL-calculated/ground-truth doses) suggest that the proposed engine can effectively convert low-accuracy doses in water to high-accuracy patient doses. The Gamma passing rate (2%/2 mm, 10% threshold) between the DL-calculated and ground-truth doses was 98.64 ± 0.62% (teacher), 98.13 ± 0.76% (student), and 96.95 ± 1.02% (student-only). The inference time was 16 milliseconds (teacher) and 11 milliseconds (student/student-only) on a graphics processing unit, and 936 milliseconds (teacher) and 374 milliseconds (student/student-only) on a central processing unit. Significance. With the KD framework, a compact dose engine can achieve accuracy comparable to that of a larger one. Its compact size reduces the computational burden and computing device requirements, making such an engine more clinically applicable.
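    The teacher-student setup in item 3 amounts to regression-style knowledge distillation: the compact student is fit to the Monte Carlo ground-truth dose while also imitating the large pre-trained teacher's prediction on the same CT plus dose-in-water input. The loss terms and weighting below are assumptions made for illustration, not the authors' published configuration.

        # Conceptual sketch of a regression KD loss for the dose engine (illustrative assumptions).
        import torch
        import torch.nn.functional as F

        def student_kd_loss(student, teacher, ct, dose_in_water, mc_dose, alpha=0.5):
            x = torch.cat([ct, dose_in_water], dim=1)     # input: CT volume + VMAT dose in water
            with torch.no_grad():
                teacher_dose = teacher(x)                 # prediction from the large pre-trained teacher
            student_dose = student(x)
            supervised = F.l1_loss(student_dose, mc_dose)         # match the Monte Carlo ground truth
            distillation = F.l1_loss(student_dose, teacher_dose)  # imitate the teacher
            return (1 - alpha) * supervised + alpha * distillation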
  4.
    We investigate a new method for injecting backdoors into machine learning models, based on compromising the loss-value computation in the model-training code. We use it to demonstrate new classes of backdoors strictly more powerful than those in the prior literature: single-pixel and physical backdoors in ImageNet models, backdoors that switch the model to a covert, privacy-violating task, and backdoors that do not require inference-time input modifications. Our attack is blind: the attacker cannot modify the training data, observe the execution of their code, or access the resulting model. The attack code creates poisoned training inputs "on the fly" as the model is training, and uses multi-objective optimization to achieve high accuracy on both the main and backdoor tasks. We show how a blind attack can evade any known defense, and we propose new defenses.
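    Item 4's attack lives entirely inside the loss-computation code: the compromised loss routine synthesizes triggered inputs on the fly and adds a backdoor-task loss to the main-task loss, so neither the training data nor the trained model needs to be touched directly. The paper balances the two objectives with multi-objective optimization; the fixed weight and single-pixel trigger below are simplifications for illustration.

        # Simplified sketch of a compromised loss computation (illustrative; the real attack
        # uses multi-objective optimization rather than a fixed weight).
        import torch
        import torch.nn.functional as F

        def compromised_loss(model, inputs, labels, target_class=0, weight=0.5):
            main_loss = F.cross_entropy(model(inputs), labels)     # what the victim expects to be computed
            poisoned = inputs.clone()
            poisoned[:, :, -1, -1] = 1.0                           # e.g. a single-pixel trigger, added on the fly
            backdoor_labels = torch.full_like(labels, target_class)
            backdoor_loss = F.cross_entropy(model(poisoned), backdoor_labels)
            return main_loss + weight * backdoor_loss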
  5. A backdoor data poisoning attack is an adversarial attack wherein the attacker injects several watermarked, mislabeled training examples into a training set. The watermark does not impact the test-time performance of the model on typical data; however, the model reliably errs on watermarked examples. To gain a better foundational understanding of backdoor data poisoning attacks, we present a formal theoretical framework within which one can discuss backdoor data poisoning attacks for classification problems. We then use this to analyze important statistical and computational issues surrounding these attacks. On the statistical front, we identify a parameter we call the memorization capacity that captures the intrinsic vulnerability of a learning problem to a backdoor attack. This allows us to argue about the robustness of several natural learning problems to backdoor attacks. Our results favoring the attacker involve presenting explicit constructions of backdoor attacks, and our robustness results show that some natural problem settings cannot yield successful backdoor attacks. From a computational standpoint, we show that under certain assumptions, adversarial training can detect the presence of backdoors in a training set. We then show that under similar assumptions, two closely related problems we call backdoor filtering and robust generalization are nearly equivalent. This implies that it is both asymptotically necessary and sufficient to design algorithms that can identify watermarked examples in the training set in order to obtain a learning algorithm that both generalizes well to unseen data and is robust to backdoors. 
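    For concreteness, the attack object formalized in item 5, a watermarked and mislabeled subset injected into the training set, can be written in a few lines. The watermark pattern, poisoning rate, and tensor layout below are illustrative assumptions rather than part of the paper's framework.

        # Illustrative construction of a watermarked, mislabeled poisoned training set.
        import torch

        def poison_training_set(images, labels, target_label, rate=0.05):
            n_poison = int(rate * len(images))
            idx = torch.randperm(len(images))[:n_poison]
            poisoned_images = images.clone()
            poisoned_labels = labels.clone()
            poisoned_images[idx, :, -3:, -3:] = 1.0      # fixed 3x3 corner patch as the watermark
            poisoned_labels[idx] = target_label          # attacker-chosen label for watermarked examples
            return poisoned_images, poisoned_labels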