Title: INSPECTRE: Privately Estimating the Unseen
We develop differentially private methods for estimating various distributional properties. Given a sample from a discrete distribution p, some functional f, and accuracy and privacy parameters α and ε, the goal is to estimate f(p) up to accuracy α while maintaining ε-differential privacy of the sample. We prove almost-tight bounds on the sample size required for this problem for several functionals of interest, including support size, support coverage, and entropy. We show that the cost of privacy is negligible in a variety of settings, both theoretically and experimentally. Our methods are based on a sensitivity analysis of several state-of-the-art methods for estimating these properties with sublinear sample complexities.
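The abstract's recipe can be made concrete: take an estimator whose output moves little when one sample changes, then add Laplace noise calibrated to that sensitivity divided by ε. Below is a minimal sketch of this pattern (not the paper's tuned estimators) for entropy via the plug-in estimator; the roughly 2·log(n)/n sensitivity bound is a standard estimate assumed here, and all names are illustrative.

```python
import numpy as np
from collections import Counter

def private_entropy_estimate(samples, epsilon, rng=None):
    """epsilon-DP estimate of Shannon entropy (in nats) via the plug-in
    estimator plus Laplace noise calibrated to a sensitivity bound.
    A sketch of the general pattern, not the paper's estimator."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(samples)
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p_hat = counts / n
    plug_in = -np.sum(p_hat * np.log(p_hat))  # empirical entropy
    # Changing one sample moves the plug-in estimate by at most about
    # 2*log(n)/n (a standard sensitivity bound, assumed here).
    sensitivity = 2.0 * np.log(n) / n
    return plug_in + rng.laplace(scale=sensitivity / epsilon)

# Example: the entropy of a fair six-sided die is log(6) ≈ 1.79 nats.
samples = np.random.default_rng(0).integers(0, 6, size=10_000)
print(private_entropy_estimate(samples, epsilon=1.0))
```

With n = 10,000 the noise scale is on the order of 0.002, which is why the abstract can say the cost of privacy is often negligible: the noise needed for privacy is far smaller than the estimator's own statistical error.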
Award ID(s): 1657471
PAR ID: 10101014
Author(s) / Creator(s):
Date Published:
Journal Name: Proceedings of Machine Learning Research
Volume: 80
ISSN: 2640-3498
Page Range / eLocation ID: 30-39
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Weller, Adrian (Ed.)
    Differential privacy (DP) offers strong theoretical privacy guarantees, but implementations of DP mechanisms may be vulnerable to side-channel attacks, such as timing attacks. When sampling methods such as MCMC or rejection sampling are used to implement a mechanism, the runtime can leak private information. We characterize the additional privacy cost due to the runtime of a rejection sampler in terms of both (ε, δ)-DP and f-DP. We also show that unless the acceptance probability is constant across databases, the runtime of a rejection sampler does not satisfy ε-DP for any ε. We show a similar breakdown in privacy for adaptive rejection samplers. We propose three modifications to the rejection sampling algorithm, under varying assumptions, that protect against timing attacks by making the runtime independent of the data. The modification with the weakest assumptions is an approximate sampler, introducing a small increase in the privacy cost, whereas the other modifications give perfect samplers. We also use our techniques to develop an adaptive rejection sampler for log-Hölder densities, which likewise has data-independent runtime. We give several examples of DP mechanisms that fit the assumptions of our methods and can thus be implemented using our samplers. (A sketch of the fixed-budget runtime idea follows this list.)
  2. In differentially private stochastic gradient descent (DPSGD), gradient clipping and random noise addition disproportionately affect underrepresented and complex classes and subgroups. As a consequence, DPSGD has disparate impact: the accuracy of a model trained with DPSGD tends to decrease more on these classes and subgroups than that of the original, non-private model. If the original model is unfair in the sense that its accuracy is not the same across all subgroups, DPSGD exacerbates this unfairness. In this work, we study the inequality in utility loss due to differential privacy, comparing the change in prediction accuracy for each group between the private and non-private models. We analyze the cost of privacy for each group and explain how the group sample size, along with other factors, relates to the privacy impact on group accuracy. Furthermore, we propose a modified DPSGD algorithm, called DPSGD-F, that achieves differential privacy, equal costs of differential privacy across groups, and good utility. DPSGD-F adaptively adjusts the contribution of samples in a group depending on the group clipping bias, so that differential privacy has no disparate impact on group accuracy. Our experimental evaluation shows the effectiveness of DPSGD-F in achieving equal costs of differential privacy with satisfactory utility. (A sketch of the standard DPSGD step follows this list.)
  3. We consider the problem of estimating the spectral density of the normalized adjacency matrix of an $$n$$-node undirected graph. We provide a randomized algorithm that, with $$O(n\epsilon^{-2})$$ queries to a degree and neighbor oracle and in $$O(n\epsilon^{-3})$$ time, estimates the spectrum up to $$\epsilon$$ accuracy in the Wasserstein-1 metric. This improves on previous state-of-the-art methods, including an $$O(n\epsilon^{-7})$$ time algorithm from [Braverman et al., STOC 2022] and, for sufficiently small $$\epsilon$$, a $$2^{O(\epsilon^{-1})}$$ time method from [Cohen-Steiner et al., KDD 2018]. To achieve this result, we introduce a new notion of graph sparsification, which we call \emph{nuclear sparsification}. We provide an $$O(n\epsilon^{-2})$$-query and $$O(n\epsilon^{-2})$$-time algorithm for computing $$O(n\epsilon^{-2})$$-sparse nuclear sparsifiers. We show that this bound is optimal in both its sparsity and query complexity, and we separate our results from the related notion of additive spectral sparsification. Of independent interest, we show that our sparsification method also yields the first \emph{deterministic} algorithm for spectral density estimation that scales linearly with $$n$$ (sublinear in the representation size of the graph). (A brute-force reference computation of the quantities involved follows this list.)
  4. We develop differentially private hypothesis testing methods for the small-sample regime. Given a sample from a categorical distribution p over some domain Σ, an explicitly described distribution q over Σ, a privacy parameter ε, an accuracy parameter α, and requirements βI and βII on the type I and type II errors of our test, the goal is to distinguish between p = q and dTV(p, q) ≥ α. We provide theoretical bounds on the sample size so that our method both satisfies (ε, 0)-differential privacy and guarantees type I and type II errors of at most βI and βII. We show that differential privacy may come for free in some regimes of parameters, and we always beat the sample complexity resulting from running the χ²-test with noisy counts, or from standard approaches such as repetition for endowing non-private χ²-style statistics with differential privacy guarantees. We experimentally compare the sample complexity of our method to that of recently proposed methods for private hypothesis testing. (A sketch of the noisy-counts baseline mentioned here follows this list.)
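For item 1 above: the timing leak arises because a rejection sampler's loop count depends on the data-dependent acceptance probability. The modification with the weakest assumptions, per that abstract, is an approximate sampler; a minimal sketch of that idea is below, where the fixed budget, the toy densities, and all names are illustrative rather than the paper's construction.

```python
import numpy as np

def fixed_budget_rejection_sampler(target_pdf, proposal_sample, proposal_pdf,
                                   M, budget, rng):
    """Rejection sampler that always runs exactly `budget` rounds, so the
    iteration count does not depend on the data behind target_pdf.
    Returns an accepted draw, or None if every round rejects, which is
    what makes this an *approximate* sampler."""
    result = None
    for _ in range(budget):  # fixed loop count: no data-dependent early exit
        x = proposal_sample(rng)
        accept = rng.uniform() < target_pdf(x) / (M * proposal_pdf(x))
        if accept and result is None:
            result = x  # keep the first accepted draw, but keep looping
    return result

# Toy example: sample from a density proportional to exp(-x^2/2) on [0, 1]
# using a uniform proposal; M bounds target_pdf / proposal_pdf there.
rng = np.random.default_rng(0)
target = lambda x: np.exp(-x**2 / 2)  # unnormalized density
draw = fixed_budget_rejection_sampler(
    target, lambda r: r.uniform(), lambda x: 1.0, M=1.0, budget=64, rng=rng)
print(draw)
```

Note the loop never exits early, even after a draw is accepted; with a large enough budget the failure probability is small, which is the "small increase in the privacy cost" the abstract refers to.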
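For item 2 above: the mechanism behind the disparate impact is the standard DPSGD step, which clips each per-example gradient to a norm bound and adds Gaussian noise scaled to that bound, so groups whose examples produce larger gradients lose more of their signal. A minimal sketch of that vanilla step (not DPSGD-F; shapes and names are illustrative):

```python
import numpy as np

def dpsgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One vanilla DPSGD update: clip each per-example gradient to
    clip_norm, sum, add Gaussian noise scaled to the clipping bound,
    and average. Clipping hits examples with large gradients hardest,
    which is the source of the disparate impact in the abstract."""
    n = per_example_grads.shape[0]
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale          # per-example clipping
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / n
    return params - lr * noisy_mean

rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10))   # 32 examples, 10 parameters
params = np.zeros(10)
params = dpsgd_step(grads, clip_norm=1.0, noise_multiplier=1.1,
                    lr=0.1, params=params, rng=rng)
```

DPSGD-F's change, per the abstract, is to adjust each group's contribution based on its clipping bias rather than applying one uniform clipping rule.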
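For item 3 above: as a reference point for the objects in that abstract, the spectral density is the uniform distribution over the eigenvalues of the normalized adjacency matrix D^{-1/2} A D^{-1/2}, and the Wasserstein-1 distance between two equal-size spectra is the mean absolute gap between their sorted eigenvalues. The exact, cubic-time baseline below is only for checking definitions on small graphs, not the paper's sublinear algorithm.

```python
import numpy as np

def normalized_adjacency_spectrum(A):
    """Eigenvalues of D^{-1/2} A D^{-1/2} for an undirected graph with
    adjacency matrix A; all eigenvalues lie in [-1, 1]."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(np.maximum(d, 1)), 0.0)
    N = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return np.linalg.eigvalsh(N)

def wasserstein1(spec_a, spec_b):
    """W1 distance between uniform distributions over two equal-size
    spectra: the mean absolute gap between sorted eigenvalues."""
    return np.abs(np.sort(spec_a) - np.sort(spec_b)).mean()

# Toy check: a cycle graph on 6 nodes has spectrum {cos(2*pi*k/6)}.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
spec = normalized_adjacency_spectrum(A)
print(spec)
print(wasserstein1(spec, np.zeros(n)))  # W1 distance to a point mass at 0
```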
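For item 4 above: the noisy-counts baseline that the abstract claims to beat is straightforward to state, and a rough sketch follows. Swapping one sample changes two histogram cells by 1 each, so per-cell Laplace noise of scale 2/ε gives ε-DP, and the χ²-style statistic is post-processing. The threshold is left as an input here: calibrating it, and the resulting sample complexity, is exactly what the paper improves on.

```python
import numpy as np

def noisy_chi2_identity_test(samples, q, epsilon, threshold, rng):
    """Baseline private identity test from the abstract's comparison:
    Laplace-noise the histogram (L1 sensitivity 2 under swapping one
    sample), then threshold a chi^2-style statistic against q.
    The threshold is an illustrative, uncalibrated input."""
    k = len(q)
    n = len(samples)
    counts = np.bincount(samples, minlength=k).astype(float)
    noisy = counts + rng.laplace(scale=2.0 / epsilon, size=k)
    expected = n * np.asarray(q)
    stat = np.sum((noisy - expected) ** 2 / np.maximum(expected, 1e-12))
    return stat <= threshold  # True: accept the hypothesis p = q

rng = np.random.default_rng(0)
q = np.ones(10) / 10
samples = rng.integers(0, 10, size=5000)  # drawn from q itself
print(noisy_chi2_identity_test(samples, q, epsilon=1.0,
                               threshold=50.0, rng=rng))
```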