Data deletion algorithms aim to remove the influence of deleted data points from trained models at a cheaper computational cost than fully retraining those models. However, for sequences of deletions, most prior work in the non-convex setting gives valid guarantees only for sequences that are chosen independently of the models that are published. If people choose to delete their data as a function of the published models (because they don't like what the models reveal about them, for example), then the update sequence is adaptive. In this paper, we give a general reduction from deletion guarantees against adaptive sequences to deletion guarantees against non-adaptive sequences, using differential privacy and its connection to max information. Combined with ideas from prior work giving guarantees for non-adaptive deletion sequences, this leads to extremely flexible algorithms able to handle arbitrary model classes and training methodologies, with strong provable deletion guarantees for adaptive deletion sequences. We show in theory how prior work for non-convex models fails against adaptive deletion sequences, and use this intuition to design a practical attack against the SISA algorithm of Bourtoule et al. [2021] on CIFAR-10, MNIST, and Fashion-MNIST.
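For context on the sharded, exact-unlearning scheme the attack targets, the following is a minimal Python sketch in the spirit of SISA (Bourtoule et al. [2021]): training data is partitioned into disjoint shards, each shard trains its own model, and a deletion retrains only the affected shard. All names here (ShardedEnsemble, train_fn) are illustrative assumptions, not the authors' reference implementation.

```python
class ShardedEnsemble:
    """SISA-style ensemble: disjoint data shards, one model per shard."""

    def __init__(self, data, num_shards, train_fn):
        self.train_fn = train_fn
        # Partition the data into disjoint shards; each shard gets its own model.
        self.shards = [data[i::num_shards] for i in range(num_shards)]
        self.models = [train_fn(shard) for shard in self.shards]

    def delete(self, point):
        # Exact unlearning: retrain only the shard that contained the point.
        for i, shard in enumerate(self.shards):
            if point in shard:
                shard.remove(point)
                self.models[i] = self.train_fn(shard)
                break

    def predict(self, x):
        # Aggregate per-shard predictions by majority vote.
        votes = [model(x) for model in self.models]
        return max(set(votes), key=votes.count)


if __name__ == "__main__":
    # Toy demo: "models" that just memorize the majority label of their shard.
    def train_fn(shard):
        labels = [y for _, y in shard]
        majority = max(set(labels), key=labels.count) if labels else 0
        return lambda x: majority

    data = [(i, i % 2) for i in range(20)]
    ens = ShardedEnsemble(data, num_shards=4, train_fn=train_fn)
    ens.delete((3, 1))        # retrains only the shard that held (3, 1)
    print(ens.predict(None))  # majority vote across the shard models
```

Roughly, the vulnerability the abstract describes arises because the published model can leak which shard a point landed in, letting an adaptive adversary target deletions; the paper's reduction instead publishes outputs in a way that reveals little about the algorithm's internal randomness (formalized via differential privacy and max information), so adaptive deletion sequences behave like non-adaptive ones.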
This content will become publicly available on December 15, 2025
Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning
- Award ID(s): 2402816
- PAR ID: 10569566
- Publisher / Repository: Openreview
- Date Published:
- Format(s): Medium: X
- Location: NeurIPS 2024 (spotlight)
- Sponsoring Org: National Science Foundation
More Like this
As intelligent agents become autonomous over longer periods of time, they may eventually become lifelong counterparts to specific people. If so, it may be common for a user to want the agent to master a task temporarily but later on to forget the task due to privacy concerns. However, enabling an agent to forget privately what the user specified without degrading the rest of the learned knowledge is a challenging problem. With the aim of addressing this challenge, this paper formalizes this continual learning and private unlearning (CLPU) problem. The paper further introduces a straightforward but exactly private solution, CLPU-DER++, as the first step towards solving the CLPU problem, along with a set of carefully designed benchmark problems to evaluate the effectiveness of the proposed solution.
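One way exact ("exactly private") forgetting can be achieved is by isolating tasks that may later need to be forgotten in their own models, so deletion cannot touch the rest of the learned knowledge. The Python sketch below illustrates only that isolation idea, not CLPU-DER++'s actual mechanics (e.g., its rehearsal-based continual learning); every name here (LifelongAgent, train_fn) is an illustrative assumption rather than the paper's implementation.

```python
class LifelongAgent:
    """Task-isolated learning: forgetting a flagged task is exact by construction."""

    def __init__(self, train_fn):
        self.train_fn = train_fn
        self.permanent = {}  # tasks the user wants kept
        self.temporary = {}  # tasks flagged for possible later forgetting

    def learn(self, task_id, data, may_forget=False):
        # Tasks that may need to be forgotten are trained as isolated models,
        # so deleting them later leaves all other knowledge untouched.
        model = self.train_fn(data)
        (self.temporary if may_forget else self.permanent)[task_id] = model

    def forget(self, task_id):
        # Exact forgetting: discard the isolated model outright.
        self.temporary.pop(task_id, None)


agent = LifelongAgent(train_fn=lambda data: ("model_for", len(data)))
agent.learn("task_a", [1, 2, 3], may_forget=True)
agent.forget("task_a")  # the isolated model is gone; permanent tasks are untouched
```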