

Title: The cost of privacy: Optimal rates of convergence for parameter estimation with differential privacy
Award ID(s): 2015378, 2015259
PAR ID: 10320377
Author(s) / Creator(s):
Date Published:
Journal Name: The Annals of Statistics
Volume: 49
Issue: 5
ISSN: 0090-5364
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. To quantify trade-offs between increasing demand for open data sharing and concerns about sensitive information disclosure, statistical data privacy (SDP) methodology analyzes data release mechanisms that sanitize outputs based on confidential data. Two dominant frameworks exist: statistical disclosure control (SDC) and the more recent differential privacy (DP). Despite framing differences, both SDC and DP share the same statistical problems at their core. For inference problems, we may either design optimal release mechanisms and associated estimators that satisfy bounds on disclosure risk measures, or adjust existing sanitized output to create new statistically valid and optimal estimators. In either case, when evaluating risk and utility, valid statistical inferences from mechanism outputs require uncertainty quantification that accounts for the bias and/or variance introduced by the sanitization mechanism. In this review, we discuss the statistical foundations common to both SDC and DP, highlight major developments in SDP, and present exciting open research problems in private inference.
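As a concrete illustration of a DP release mechanism and the extra variance it introduces (the abstract's point about uncertainty quantification), here is a minimal sketch of the canonical Laplace mechanism applied to a bounded sample mean. The function name, bounds, and parameters are illustrative, not taken from the paper.

```python
import math
import random

def laplace_mechanism_mean(data, lower, upper, epsilon):
    """Release an epsilon-DP mean of `data`, assumed bounded in [lower, upper]."""
    n = len(data)
    clipped = [min(max(x, lower), upper) for x in data]
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n      # L1 sensitivity of the clipped mean
    scale = sensitivity / epsilon          # Laplace noise scale b
    # Draw Laplace(0, scale) noise via the inverse-CDF method
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_mean + noise

# The added noise has variance 2 * scale**2; a valid confidence interval
# for the released mean must account for it on top of sampling variance.
```

Tighter privacy (smaller epsilon) inflates the noise scale, which is exactly the bias/variance effect that downstream inference must adjust for.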
  2. Towards the vision of building artificial intelligence systems that can assist with our everyday lives, we introduce a proof of concept for a social media privacy "cyborg" which can locally and privately monitor a person's published content and offer advice or warnings when their privacy is at stake. The idea of a cyborg is more general: a separate local entity, with its own computational resources, that can automatically perform online tasks on our behalf. For this demonstration, we assume an attacker that can successfully infer user attributes solely from what the user has published (topic-based inference). We focus on social media privacy, specifically the exposure of sensitive user attributes, like location or race, through published content. We built a privacy cyborg that can monitor a user's posted topics and automatically warn them in real time when a sensitive attribute is at risk of being exposed.
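The monitoring loop described above can be sketched as follows. This is a hypothetical toy, not the demonstrated system: the topic-to-attribute mapping, function name, and warning threshold are all invented for illustration.

```python
from collections import Counter

# Toy mapping from post topics to the sensitive attribute they may expose
# (illustrative only; a real system would learn this from an inference model)
TOPIC_TO_ATTRIBUTE = {
    "local_restaurants": "location",
    "neighborhood_events": "location",
    "heritage_festival": "race",
}

def check_posts(published_topics, threshold=2):
    """Return attributes at risk once enough revealing topics accumulate."""
    exposure = Counter(
        TOPIC_TO_ATTRIBUTE[t] for t in published_topics if t in TOPIC_TO_ATTRIBUTE
    )
    return [attr for attr, count in exposure.items() if count >= threshold]

warnings = check_posts(["local_restaurants", "neighborhood_events", "cooking"])
# warnings == ["location"]
```

Running such a check locally, on each new post, is what lets the cyborg warn the user in real time without sending their content to a third party.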