We consider the problem of minimizing a convex risk with stochastic subgradients that guarantee $$\epsilon$$-local differential privacy ($$\epsilon$$-LDP). While it has been shown that stochastic optimization is possible under $$\epsilon$$-LDP via standard SGD, its convergence rate depends heavily on the learning rate, which must be tuned via repeated runs. Such tuning is detrimental to privacy because it significantly increases the number of gradient requests. In this work, we propose BANCO (Betting Algorithm for Noisy COins), the first $$\epsilon$$-LDP SGD algorithm that essentially matches the convergence rate of the tuned SGD without any learning rate parameter, reducing privacy loss and saving privacy budget.
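To make the baseline concrete, below is a minimal sketch of the tuned $$\epsilon$$-LDP SGD that the abstract contrasts against (not the proposed BANCO algorithm): each stochastic subgradient is clipped and released through the Laplace mechanism, and the update still requires a hand-tuned learning rate. The function name `ldp_sgd` and parameters `eta`, `clip`, and the toy mean-estimation example are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ldp_sgd(subgrad, x0, n_steps, eta, eps, clip=1.0, seed=None):
    """Sketch of a baseline epsilon-LDP SGD (not the paper's BANCO):
    each stochastic subgradient is rescaled to L1 norm at most `clip`
    (so the report has L1 sensitivity 2*clip) and perturbed via the
    Laplace mechanism before a plain SGD step with learning rate `eta`."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    avg = np.zeros_like(x)
    for t in range(1, n_steps + 1):
        g = np.asarray(subgrad(x), dtype=float)      # stochastic subgradient
        norm = np.sum(np.abs(g))
        if norm > clip:                               # clip to bound sensitivity
            g *= clip / norm
        g += rng.laplace(scale=2.0 * clip / eps, size=g.shape)  # eps-LDP report
        x = x - eta * g                               # eta must be tuned by repeated runs
        avg += (x - avg) / t                          # averaged iterate
    return avg

# Toy usage (hypothetical): privately estimate a stream's mean via
# the squared loss f(x) = E[(x - z)^2] / 2, whose subgradient is x - z.
rng = np.random.default_rng(0)
stream = iter(rng.normal(loc=0.3, scale=1.0, size=20_000))
x_hat = ldp_sgd(lambda x: x - next(stream), x0=[0.0],
                n_steps=20_000, eta=0.01, eps=1.0)
print(x_hat)  # roughly 0.3 for a well-chosen eta; a poor eta degrades the rate
```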