Title: Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions
We study the problem of differentially private stochastic convex optimization (DP-SCO) with heavy-tailed gradients, where we assume a $k$th-moment bound on the Lipschitz constants of sample functions rather than a uniform bound. We propose a new reduction-based approach that enables us to obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error $G_2 \cdot \frac{1}{\sqrt{n}} + G_k \cdot \big(\frac{\sqrt{d}}{n\epsilon}\big)^{1-\frac{1}{k}}$ under $(\epsilon,\delta)$-approximate differential privacy, up to a mild $$\textup{polylog}(\frac{1}{\delta})$$ factor, where $G_2^2$ and $G_k^k$ are the 2nd and $k$th moment bounds on sample Lipschitz constants, nearly matching a lower bound of [Lowy and Razaviyayn 2023]. We further give a suite of private algorithms in the heavy-tailed setting which improve upon our basic result under additional assumptions, including an optimal algorithm under a known-Lipschitz-constant assumption, a near-linear time algorithm for smooth functions, and an optimal linear time algorithm for smooth generalized linear models.
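For concreteness, the moment assumptions and the rate stated above can be written out as follows. This is only a restatement of the abstract's bound, suppressing logarithmic and $$\textup{polylog}(\frac{1}{\delta})$$ factors; the notation $L(s)$ for the Lipschitz constant of the sample function at sample $s$, $F$ for the population objective, $\hat{x}$ for the algorithm's output, and $x^\star$ for a population minimizer is introduced here for illustration only.
$$\mathbb{E}_s\big[L(s)^2\big] \le G_2^2, \quad \mathbb{E}_s\big[L(s)^k\big] \le G_k^k \qquad\Longrightarrow\qquad \mathbb{E}\big[F(\hat{x})\big] - F(x^\star) \;\lesssim\; \frac{G_2}{\sqrt{n}} + G_k\Big(\frac{\sqrt{d}}{n\epsilon}\Big)^{1-\frac{1}{k}}.$$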
Award ID(s):
2505865
PAR ID:
10631926
Author(s) / Creator(s):
; ;
Publisher / Repository:
https://doi.org/10.48550/arXiv.2406.02789
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper studies differentially private stochastic convex optimization (DP-SCO) in the presence of heavy-tailed gradients, where only a $k$th-moment bound on sample Lipschitz constants is assumed, instead of a uniform bound. The authors propose a reduction-based approach that achieves the first near-optimal error rates (up to logarithmic factors) in this setting. Specifically, under $(\epsilon,\delta)$-approximate differential privacy, they achieve an error bound of $\frac{G_2}{\sqrt{n}} + G_k \cdot \big(\frac{\sqrt{d}}{n\epsilon}\big)^{1-\frac{1}{k}}$, up to a mild polylogarithmic factor in $\frac{1}{\delta}$, where $G_2$ and $G_k$ are the 2nd and $k$th moment bounds on sample Lipschitz constants. This nearly matches the lower bound established by Lowy and Razaviyayn (2023). Beyond the basic result, the authors introduce a suite of private algorithms that further improve performance under additional assumptions: an optimal algorithm under a known Lipschitz constant, a near-linear time algorithm for smooth functions, and an optimal linear-time algorithm for smooth generalized linear models.
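The reduction itself is not spelled out in the abstract, but a standard building block for private optimization with potentially heavy-tailed per-sample gradients is to clip each sample gradient to a threshold and add Gaussian noise to the average. The sketch below is a generic DP-SGD-style step in plain NumPy, not the authors' algorithm; grad_fn, clip_C, and noise_std are illustrative assumptions.

import numpy as np

def private_clipped_gradient(grad_fn, params, batch, clip_C, noise_std, rng):
    # Average per-sample gradients after clipping each to l2-norm clip_C, then add
    # Gaussian noise with scale noise_std * clip_C / batch size. This is a generic
    # clipped-and-noised step, not the reduction from the paper.
    grads = np.stack([grad_fn(params, x) for x in batch])        # per-sample gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_C / np.maximum(norms, 1e-12))   # clip to norm clip_C
    mean = (grads * scale).mean(axis=0)
    noise = rng.normal(0.0, noise_std * clip_C / len(batch), size=mean.shape)
    return mean + noise

# Illustrative usage inside an SGD loop:
#   params = params - lr * private_clipped_gradient(grad_fn, params, batch, 1.0, 4.0, rng)

With heavy-tailed gradients, the clipping threshold trades bias (from truncating large true gradients) against the noise required for privacy; moment bounds such as $G_k$ are what govern that trade-off in analyses of this kind.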
  2. In this work, we revisit the generalization error of stochastic mirror descent for quadratically bounded losses studied in Telgarsky (2022). Quadratically bounded losses are a broad class of loss functions, capturing both Lipschitz and smooth functions, for both regression and classification problems. We study high-probability generalization for this class of losses on linear predictors in both realizable and non-realizable cases, when the data are sampled IID or from a Markov chain. The prior work relies on an intricate coupling argument between the iterates of the original problem and those projected onto a bounded domain. This approach enables black-box application of concentration inequalities, but also leads to suboptimal guarantees, due in part to the use of a union bound across all iterations. In this work, we depart significantly from the prior work of Telgarsky (2022) and introduce a novel approach for establishing high-probability generalization guarantees. In contrast to the prior work, our approach directly analyzes the moment generating function of a novel supermartingale sequence and leverages the structure of stochastic mirror descent. As a result, we obtain improved bounds in all aforementioned settings. Specifically, in the realizable case and in the non-realizable case with light-tailed sub-Gaussian data, we improve the bounds by a $$\log T$$ factor, matching the correct rates of $1/T$ and $$1/\sqrt{T}$$, respectively. In the more challenging case of heavy-tailed polynomial data, we improve the existing bound by a $$\mathrm{poly}\ T$$ factor.
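As a concrete reference point for the algorithm being analyzed, one stochastic mirror descent step with a generic mirror map looks as follows. This is a textbook sketch of the update, not the paper's analysis technique; the squared-Euclidean mirror map in the example (which reduces the update to plain SGD) is chosen purely for illustration.

import numpy as np

def mirror_descent_step(w, stoch_grad, lr, grad_psi, grad_psi_inv):
    # One stochastic mirror descent step: map the iterate into the dual space via the
    # gradient of the mirror map psi, take a gradient step there, and map back.
    dual = grad_psi(w) - lr * stoch_grad       # step in the dual (mirror) space
    return grad_psi_inv(dual)                  # back to the primal space

# With psi(w) = 0.5 * ||w||^2, grad_psi and its inverse are both the identity,
# recovering plain SGD (the stochastic gradient below is illustrative):
identity = lambda v: v
w = np.zeros(3)
g = np.array([0.5, -1.0, 2.0])
w = mirror_descent_step(w, g, lr=0.1, grad_psi=identity, grad_psi_inv=identity)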
  3. We study local filters for the Lipschitz property of real-valued functions $f : V \to [0,r]$, where the Lipschitz property is defined with respect to an arbitrary undirected graph $G = (V, E)$. We give nearly optimal local Lipschitz filters both with respect to $\ell_1$-distance and $\ell_0$-distance. Previous work only considered unbounded-range functions over $[n]^d$. Jha and Raskhodnikova (SICOMP '13) gave an algorithm for such functions with lookup complexity exponential in $d$, which Awasthi et al. (ACM Trans. Comput. Theory) showed was necessary in this setting. We demonstrate that important applications of local Lipschitz filters can be accomplished with filters for functions whose range is bounded in $[0,r]$. For functions $f : [n]^d \to [0,r]$, we achieve running time $(dr \log n)^{O(\log r)}$ for the $\ell_1$-respecting filter and $d^{O(r)}\,\mathrm{polylog}\, n$ for the $\ell_0$-respecting filter, thus circumventing the lower bound. Our local filters provide a novel Lipschitz extension that can be implemented locally. Furthermore, we show that our algorithms are nearly optimal in terms of the dependence on $r$ for the domain $\{0,1\}^d$, an important special case of the domain $[n]^d$. In addition, our lower bound resolves an open question of Awasthi et al., removing one of the conditions necessary for their lower bound for general range. We prove our lower bound via a reduction from distribution-free Lipschitz testing and a new technique for proving hardness for adaptive algorithms. Finally, we provide two applications of our local filters to real-valued functions, with no restrictions on the range. In the first application, we use them in conjunction with the Laplace mechanism for differential privacy and noisy binary search to provide mechanisms for privately releasing outputs of black-box functions, even in the presence of malicious clients. In particular, our differentially private mechanism for arbitrary real-valued functions runs in time $2^{\mathrm{polylog}\,\min(r, nd)}$ and, for honest clients, has accuracy comparable to the Laplace mechanism for Lipschitz functions, up to a factor of $O(\log \min(r, nd))$. In the second application, we use our local filters to obtain the first nontrivial tolerant tester for the Lipschitz property. Our tester works for functions of the form $f : \{0,1\}^d \to \mathbb{R}$, makes queries, and has tolerance ratio 2.01. Our applications demonstrate that local filters for bounded-range functions can be applied to construct efficient algorithms for arbitrary real-valued functions.
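A local filter answers queries to a corrected function $g$ that satisfies the Lipschitz property and equals $f$ whenever $f$ already does, while reading only a few values of $f$. As a non-local point of reference, the sketch below computes the graph analogue of the McShane extension by brute force; reading all of $f$ and running a BFS per query is exactly the cost a local filter is designed to avoid, and the clamping to $[0, r]$ is an illustrative choice, not the paper's $\ell_1$- or $\ell_0$-respecting construction.

from collections import deque

def lipschitz_filter_bruteforce(adj, f, r):
    # Return a function g with |g(u) - g(v)| <= dist(u, v) for all u, v and values
    # clamped to [0, r]; if f is already Lipschitz with range in [0, r], g equals f.
    def dist_from(s):                          # BFS distances from the queried vertex
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        return d

    def g(x):
        d = dist_from(x)
        val = min(f[y] + d[y] for y in d)      # McShane-style upper extension
        return min(max(val, 0), r)             # clamp into the range [0, r]
    return g

# Example: path graph 0-1-2 with a Lipschitz violation at vertex 2.
adj = {0: [1], 1: [0, 2], 2: [1]}
f = {0: 0, 1: 1, 2: 5}
g = lipschitz_filter_bruteforce(adj, f, r=4)
print([g(v) for v in adj])                     # [0, 1, 2] -- the violation is repaired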
  4. We study offline reinforcement learning (RL) with heavy-tailed reward distributions and data corruption: (i) moving beyond sub-Gaussian reward distributions, we allow the rewards to have infinite variances; (ii) we allow corruptions where an attacker can arbitrarily modify a small fraction of the rewards and transitions in the dataset. We first derive a sufficient optimality condition for generalized Pessimistic Value Iteration (PEVI), which allows various estimators with proper confidence bounds and can be applied to multiple learning settings. To handle the data-corruption and heavy-tailed reward setting, we prove that trimmed-mean estimation achieves the minimax optimal error rate for robust mean estimation under heavy-tailed distributions. In the PEVI algorithm, we plug in the trimmed-mean estimator and its confidence bound to solve the robust offline RL problem. Standard analysis reveals that data corruption induces a bias term in the suboptimality gap, which gives the false impression that any data corruption prevents optimal policy learning. By using the optimality condition for generalized PEVI, we show that, as long as the bias term is less than the "action gap", the policy returned by PEVI achieves the optimal value given sufficient data.
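Trimmed-mean estimation, the robust building block plugged into PEVI above, simply discards an equal fraction of the smallest and largest observations before averaging. The minimal NumPy sketch below illustrates the estimator and its robustness to heavy tails and a few corrupted points; the trim fraction is an illustrative parameter, and this is not the paper's full estimator with its confidence bounds.

import numpy as np

def trimmed_mean(samples, trim_frac=0.1):
    # Drop the lowest and highest trim_frac fraction of the samples, then average.
    x = np.sort(np.asarray(samples, dtype=float))
    k = int(np.floor(trim_frac * len(x)))      # number trimmed from each tail
    kept = x[k:len(x) - k] if len(x) > 2 * k else x
    return kept.mean()

# Heavy-tailed rewards plus a few adversarially corrupted entries: the plain mean is
# dragged toward the outliers, while the trimmed mean stays near the bulk of the data.
rng = np.random.default_rng(0)
rewards = np.concatenate([rng.standard_t(df=2, size=1000), [1e6, -1e6, 1e6]])
print(np.mean(rewards), trimmed_mean(rewards, trim_frac=0.05))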