To quantify trade-offs between the increasing demand for open data sharing and concerns about sensitive information disclosure, statistical data privacy (SDP) methodology analyzes data release mechanisms that sanitize outputs based on confidential data. Two dominant frameworks exist: statistical disclosure control (SDC) and the more recent differential privacy (DP). Despite framing differences, SDC and DP share the same statistical problems at their core. For inference problems, we may either design optimal release mechanisms and associated estimators that satisfy bounds on disclosure risk measures, or adjust existing sanitized outputs to create new, statistically valid and optimal estimators. In either case, evaluating risk and utility requires valid statistical inferences from mechanism outputs, which in turn require uncertainty quantification that accounts for the bias and/or variance introduced by the sanitization mechanism. In this review, we discuss the statistical foundations common to both SDC and DP, highlight major developments in SDP, and present exciting open research problems in private inference.
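To make the uncertainty-quantification point concrete, here is a minimal Python sketch, not taken from the review itself, of releasing a mean through the Laplace mechanism and widening the confidence interval to account for the mechanism's noise. The function name, the bounded-data assumption, the normal approximation, and the non-private plug-in for the sampling variance are all illustrative assumptions.

```python
import numpy as np

def dp_mean_with_ci(x, lower, upper, epsilon, rng=None):
    """Sketch: release a DP mean via the Laplace mechanism and report a CI
    whose width accounts for both sampling noise and mechanism noise."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    x_clipped = np.clip(x, lower, upper)          # assumes data bounded in [lower, upper]

    # Laplace mechanism: sensitivity of the mean of n bounded values.
    sensitivity = (upper - lower) / n
    scale = sensitivity / epsilon
    noisy_mean = x_clipped.mean() + rng.laplace(0.0, scale)

    # Total variance = sampling variance + mechanism variance (Laplace var = 2 * scale^2).
    # NOTE: the plug-in sampling variance is not itself privatized in this sketch.
    sampling_var = x_clipped.var(ddof=1) / n
    mechanism_var = 2.0 * scale**2
    se = np.sqrt(sampling_var + mechanism_var)

    z = 1.96  # approximate 95% normal interval; an approximation, not exact
    return noisy_mean, (noisy_mean - z * se, noisy_mean + z * se)

# Ignoring mechanism_var would understate uncertainty, especially at small epsilon.
x = np.random.default_rng(0).uniform(0, 1, size=200)
print(dp_mean_with_ci(x, 0.0, 1.0, epsilon=0.5))
```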
Finding ε and δ of Traditional Disclosure Control Systems
This paper analyzes the privacy of traditional Statistical Disclosure Control (SDC) systems under a differential privacy interpretation. SDC methods, such as cell suppression and swapping, promise to safeguard the confidentiality of data and are routinely adopted in data analyses with profound societal and economic impacts. Through a formal analysis and empirical evaluation of demographic data from real households in the U.S., the paper shows that widely adopted SDC systems not only induce vastly larger privacy losses than classical differential privacy mechanisms but may also come at a cost in accuracy and fairness.
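As a hedged illustration of the paper's theme, and not its actual analysis of cell suppression or swapping, the sketch below treats a toy binary attribute-swapping mechanism, which behaves like randomized response, and compares its analytic ε, ln((1 − p)/p), against a Monte Carlo estimate of the worst-case log-likelihood ratio between outputs on the two possible inputs. All names and parameters are hypothetical.

```python
import numpy as np

def swap_mechanism(value, p_swap, rng):
    """Toy attribute swapping: flip a binary attribute with probability p_swap."""
    return 1 - value if rng.random() < p_swap else value

def empirical_epsilon(p_swap, n_trials=200_000, seed=0):
    """Monte Carlo estimate of max_o |log Pr[M(0)=o] - log Pr[M(1)=o]|
    for the toy swap mechanism above."""
    rng = np.random.default_rng(seed)
    out0 = np.array([swap_mechanism(0, p_swap, rng) for _ in range(n_trials)])
    out1 = np.array([swap_mechanism(1, p_swap, rng) for _ in range(n_trials)])
    eps = 0.0
    for o in (0, 1):
        p0 = max((out0 == o).mean(), 1e-12)
        p1 = max((out1 == o).mean(), 1e-12)
        eps = max(eps, abs(np.log(p0 / p1)))
    return eps

p = 0.1
print("analytic epsilon: ", np.log((1 - p) / p))   # ln(0.9 / 0.1) ≈ 2.197
print("empirical epsilon:", empirical_epsilon(p))
```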
- PAR ID: 10494481
- Publisher / Repository: AAAI Conference on Artificial Intelligence (AAAI-24)
- Date Published:
- Journal Name: Proceedings of the AAAI Conference on Artificial Intelligence
- Volume: 38
- Issue: 20
- ISSN: 2159-5399
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Post-processing immunity is a fundamental property of differential privacy: it enables the application of arbitrary data-independent transformations to the results of differentially private outputs without affecting their privacy guarantees. When query outputs must satisfy domain constraints, post-processing can be used to project them back onto the feasible region. Moreover, when the feasible region is convex, a widely adopted class of post-processing steps is also guaranteed to improve accuracy. Post-processing has been applied successfully in many applications, including census data, energy systems, and mobility. However, its effects on the noise distribution are poorly understood: it is often argued that post-processing may introduce bias and increase variance. This paper takes a first step towards understanding the properties of post-processing. It considers the release of census data and examines, both empirically and theoretically, the behavior of a widely adopted class of post-processing functions. A minimal projection sketch illustrating this kind of post-processing appears after this list.
- With the proliferation of Beyond 5G (B5G) communication systems and heterogeneous networks, mobile broadband users are generating massive volumes of data that undergo fast processing and computing to obtain actionable insights. While analyzing this huge amount of data typically involves machine and deep learning-based data-driven Artificial Intelligence (AI) models, a key challenge arises in terms of providing privacy assurances for user-generated data. Even though data-driven techniques have been widely utilized for network traffic analysis and other network management tasks, researchers have also identified that applying AI techniques may often lead to severe privacy concerns. Therefore, the concept of privacy-preserving data-driven learning models has recently emerged as a hot area of research to facilitate model training on large-scale datasets while guaranteeing privacy along with the security of the data. In this paper, we first demonstrate the research gap in this domain, followed by a tutorial-oriented review of data-driven models, which can be potentially mapped to privacy-preserving techniques. Then, we provide preliminaries of a number of privacy-preserving techniques (e.g., differential privacy, functional encryption, homomorphic encryption, secure multi-party computation, and federated learning) that can be potentially adopted for emerging communication networks. The provided preliminaries enable us to showcase the subset of data-driven privacy-preserving models which are gaining traction in emerging communication network systems. We provide a number of relevant networking use cases, ranging from the B5G core and Radio Access Networks (RANs) to semantic communications, adopting privacy-preserving data-driven models. Based on the lessons learned from the pertinent use cases, we also identify several open research challenges and hint toward possible solutions.
- We consider how to privately share the personalized privacy losses incurred by objective perturbation, using per-instance differential privacy (pDP). Standard differential privacy (DP) gives us a worst-case bound that might be orders of magnitude larger than the privacy loss to a particular individual relative to a fixed dataset. The pDP framework provides a more fine-grained analysis of the privacy guarantee to a target individual, but the per-instance privacy loss itself might be a function of sensitive data. In this paper, we analyze the per-instance privacy loss of releasing a private empirical risk minimizer learned via objective perturbation, and propose a group of methods to privately and accurately publish the pDP losses at little to no additional privacy cost. An objective-perturbation sketch appears after this list.
- Despite recent widespread deployment of differential privacy, relatively little is known about what users think of differential privacy. In this work, we seek to explore users' privacy expectations related to differential privacy. Specifically, we investigate (1) whether users care about the protections afforded by differential privacy, and (2) whether they are therefore more willing to share their data with differentially private systems. Further, we attempt to understand (3) users' privacy expectations of the differentially private systems they may encounter in practice and (4) their willingness to share data in such systems. To answer these questions, we use a series of rigorously conducted surveys (n = 2424). We find that users care about the kinds of information leaks against which differential privacy protects and are more willing to share their private information when the risks of these leaks are less likely to happen. Additionally, we find that the ways in which differential privacy is described in the wild haphazardly set users' privacy expectations, which can be misleading depending on the deployment. We synthesize our results into a framework for understanding a user's willingness to share information with differentially private systems, which takes into account the interaction between the user's prior privacy concerns and how differential privacy is described.
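For the post-processing abstract above, the following sketch illustrates one widely used convex post-processing step: Euclidean projection of noisy counts onto the set of non-negative vectors summing to a publicly known total (an invariant). The specific projection, the known-total assumption, and all names are illustrative; this is not necessarily the paper's exact post-processing function.

```python
import numpy as np

def project_to_simplex(y, total):
    """Euclidean projection of y onto {x : x >= 0, sum(x) = total}."""
    u = np.sort(y)[::-1]                 # sort descending
    css = np.cumsum(u)
    ks = np.arange(1, len(y) + 1)
    cond = u - (css - total) / ks > 0    # active-set condition
    rho = ks[cond][-1]
    tau = (css[cond][-1] - total) / rho
    return np.maximum(y - tau, 0.0)

rng = np.random.default_rng(1)
true_counts = np.array([40.0, 35.0, 20.0, 5.0])
noisy = true_counts + rng.laplace(0.0, 2.0, size=4)     # DP noise; may be negative
released = project_to_simplex(noisy, total=true_counts.sum())
print(noisy, released, released.sum())
```

Because projection is data-independent given the noisy counts and the public total, it is a post-processing step and does not change the privacy guarantee; the paper's question is how it reshapes the error distribution.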
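For the per-instance differential privacy abstract above, this sketch shows objective perturbation for L2-regularized logistic regression in the style of standard objective-perturbation mechanisms. The noise calibration is simplified and omits the regularization condition needed for a formal ε-DP guarantee; all function names, hyperparameters, and the toy data are hypothetical.

```python
import numpy as np

def objective_perturbation_logreg(X, y, epsilon, lam=1.0, steps=2000, lr=0.1, seed=0):
    """Sketch: minimize (1/n) * sum log(1 + exp(-y x^T w)) + (lam/2)||w||^2 + (1/n) b^T w,
    with b drawn with a uniform direction and Gamma(d, 2/epsilon) norm.
    Assumes rows of X have norm <= 1 and y in {-1, +1}; calibration is simplified."""
    rng = np.random.default_rng(seed)
    n, d = X.shape

    # Perturbation vector b: uniform direction, Gamma-distributed norm.
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    b = direction * rng.gamma(shape=d, scale=2.0 / epsilon)

    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        grad_loss = -(X * (y * (1.0 / (1.0 + np.exp(margins))))[:, None]).mean(axis=0)
        grad = grad_loss + lam * w + b / n
        w -= lr * grad
    return w

# Toy data: two Gaussian clusters, rows rescaled to norm <= 1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.5, 0.3, (100, 2)), rng.normal(-0.5, 0.3, (100, 2))])
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)
y = np.concatenate([np.ones(100), -np.ones(100)])
print(objective_perturbation_logreg(X, y, epsilon=1.0))
```

The per-instance (pDP) loss of such a release depends on the individual's actual contribution to the fitted objective, which is what makes publishing those losses itself a privacy question in the paper.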