- Award ID(s):
- 1815322
- NSF-PAR ID:
- 10351620
- Date Published:
- Journal Name:
- IEEE Transactions on Information Forensics and Security
- Volume:
- 17
- ISSN:
- 1556-6013
- Page Range / eLocation ID:
- 2110 to 2124
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Inspired by earlier academic research, iOS app privacy labels and the recent Google Play data safety labels have been introduced as a way to systematically present users with concise summaries of an app’s data practices. Yet, little research has been conducted to determine how well today’s mobile app privacy labels address people’s actual privacy concerns or questions. We analyze a crowd-sourced corpus of privacy questions collected from mobile app users to determine to what extent these mobile app labels actually address users’ privacy concerns and questions. While there are differences between iOS labels and Google Play labels, our results indicate that a substantial percentage of people’s privacy questions are not answered or are only partially addressed in today’s labels. Findings from this work not only shed light on the additional fields that would need to be included in mobile app privacy labels but can also help inform refinements to existing labels to better address users’ typical privacy questions.
-
Security protocols enable secure communication over insecure channels. Privacy protocols enable private interactions over secure channels. Security protocols set up secure channels using cryptographic primitives. Privacy protocols set up private channels using secure channels. But just like some security protocols can be broken without breaking the underlying cryptography, some privacy protocols can be broken without breaking the underlying security. Such privacy attacks have been used to leverage e-commerce against targeted advertising from the outset; but their depth and scope became apparent only with the overwhelming advent of influence campaigns in politics. The blurred boundaries between privacy protocols and privacy attacks present a new challenge for protocol analysis. Covert channels turn out to be concealed not only below overt channels, but also above: subversions, and the level-below attacks are supplemented by sublimations and the level-above attacks.
-
To quantify trade-offs between increasing demand for open data sharing and concerns about sensitive information disclosure, statistical data privacy (SDP) methodology analyzes data release mechanisms that sanitize outputs based on confidential data. Two dominant frameworks exist: statistical disclosure control (SDC) and the more recent differential privacy (DP). Despite framing differences, both SDC and DP share the same statistical problems at their core. For inference problems, either we may design optimal release mechanisms and associated estimators that satisfy bounds on disclosure risk measures, or we may adjust existing sanitized output to create new statistically valid and optimal estimators. Regardless of design or adjustment, in evaluating risk and utility, valid statistical inferences from mechanism outputs require uncertainty quantification that accounts for the effect of the sanitization mechanism that introduces bias and/or variance. In this review, we discuss the statistical foundations common to both SDC and DP, highlight major developments in SDP, and present exciting open research problems in private inference.
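The uncertainty-quantification point above — that valid inference from a sanitized release must account for the noise the mechanism injects — can be illustrated with a minimal differential-privacy sketch. This uses the standard Laplace mechanism for a bounded mean; the function names, parameters, and confidence-interval construction are illustrative assumptions, not drawn from the review itself:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    # Add Laplace(0, b) noise with scale b = sensitivity / epsilon,
    # sampled by inverse-CDF from a uniform draw on (-0.5, 0.5).
    b = sensitivity / epsilon
    u = rng.random() - 0.5
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

def noisy_mean_ci(values, lower, upper, epsilon, z=1.96, rng=random):
    # Release a differentially private mean of values clipped to
    # [lower, upper], with a confidence interval whose variance term
    # includes BOTH the sampling variance and the injected Laplace
    # noise variance (2 * b**2) -- the "accounting for the
    # sanitization mechanism" step described in the abstract.
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    mean = sum(clipped) / n
    sensitivity = (upper - lower) / n  # L1 sensitivity of a bounded mean
    dp_mean = laplace_mechanism(mean, sensitivity, epsilon, rng)
    b = sensitivity / epsilon
    sample_var = sum((v - mean) ** 2 for v in clipped) / max(n - 1, 1)
    se = math.sqrt(sample_var / n + 2.0 * b * b)
    return dp_mean, dp_mean - z * se, dp_mean + z * se
```

Ignoring the `2 * b**2` term would yield intervals that are too narrow and therefore invalid — the variance-adjustment issue the review identifies as common to both SDC and DP.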