Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current approaches to privacy-preserving AI/ML (e.g., federated learning, differential privacy, checklists) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.
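To make that limitation concrete, consider differential privacy, one of the approaches named above. A minimal sketch of the Laplace mechanism (illustrative only, using numpy; the query and numbers are placeholders, not drawn from the paper) shows the kind of protection it provides:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise calibrated to the query's sensitivity."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release a count over a training dataset. Adding or
# removing one person's record changes a count by at most 1, so the
# sensitivity is 1. The value 1042 is an illustrative placeholder.
private_count = laplace_mechanism(true_value=1042, sensitivity=1.0, epsilon=0.5)
print(private_count)
```

The noisy release bounds what anyone can infer about a single record from the output, which is the kind of risk differential privacy is designed for; it does not address, for example, exposure risks from deepfakes or surveillance risks from collecting the data in the first place.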
“I Don’t Know If We’re Doing Good. I Don’t Know If We’re Doing Bad”: Investigating How Practitioners Scope, Motivate, and Conduct Privacy Work When Developing AI Products
How do practitioners who develop consumer AI products scope, motivate, and conduct privacy work? Respecting privacy is a key principle for developing ethical, human-centered AI systems, but we cannot hope to better support practitioners without answers to that question. We interviewed 35 industry AI practitioners to bridge that gap. We found that practitioners viewed privacy as actions taken against pre-defined intrusions that can be exacerbated by the capabilities and requirements of AI, but few were aware of AI-specific privacy intrusions documented in prior literature. We found that their privacy work was rigidly defined and situated, guided by compliance with privacy regulations and policies, and generally demotivated beyond meeting minimum requirements. Finally, we found that the methods, tools, and resources they used in their privacy work generally did not help address the unique privacy risks introduced or exacerbated by their use of AI in their products. Collectively, these findings reveal the need and opportunity to create tools, resources, and support structures to improve practitioners' awareness of AI-specific privacy risks, motivations to do AI privacy work, and ability to address privacy harms introduced or exacerbated by their use of AI in consumer products.
- Award ID(s): 2316768
- PAR ID: 10543400
- Publisher / Repository: USENIX Security
- Date Published:
- ISBN: 978-1-939133-44-1
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Reiter, Harvey L (Ed.) Adopting Artificial Intelligence (AI) in electric utilities signifies vast, yet largely untapped potential for accelerating a clean energy transition. This requires tackling complex challenges such as trustworthiness, explainability, privacy, cybersecurity, and governance, balancing these against AI's benefits. This article aims to facilitate dialogue among regulators, policymakers, utilities, and other stakeholders on navigating these complex issues, fostering a shared understanding and approach to leveraging AI's transformative power responsibly. The complex interplay of state and federal regulations necessitates careful coordination, particularly as AI impacts energy markets and national security. Promoting data sharing with privacy and cybersecurity in mind is critical. The article advocates for 'realistic open benchmarks' to foster innovation without compromising confidentiality. Trustworthiness (the system's ability to ensure reliability and performance, and to inspire confidence and transparency) and explainability (ensuring that AI decisions are understandable and accessible to a large diversity of participants) are fundamental for AI acceptance, necessitating transparent, accountable, and reliable systems. AI must be deployed in a way that helps keep the lights on. As AI becomes more involved in decision-making, we need to think about who's responsible and what's ethical. With the current state of the art, using generative AI for critical, near real-time decision-making should be approached carefully. While AI is advancing rapidly both in terms of technology and regulation, within and beyond the scope of energy-specific applications, this article aims to provide timely insights and a common understanding of AI, its opportunities and challenges for electric utility use cases, and ultimately help advance its adoption in the power system sector, to accelerate the equitable clean energy transition.
- Abstract: Health data is considered to be sensitive and personal; both governments and software platforms have enacted specific measures to protect it. Consumer apps that collect health data are becoming more popular, but raise new privacy concerns as they collect unnecessary data, share it with third parties, and track users. However, developers of these apps are not necessarily knowingly endangering users' privacy; some may simply face challenges working with health features. To scope these challenges, we qualitatively analyzed 269 privacy-related posts on Stack Overflow by developers of health apps for Android- and iOS-based systems. We found that health-specific access control structures (e.g., enhanced requirements for permissions and authentication) underlie several privacy-related challenges developers face. The specific nature of problems often differed between the platforms, for example additional verification steps for Android developers, or confusing feedback about incorrectly formulated permission scopes for iOS. Developers also face problems introduced by third-party libraries. Official documentation plays a key part in understanding privacy requirements, but in some cases, may itself cause confusion. We discuss implications of our findings and propose ways to improve developers' experience of working with health-related features -- and consequently to improve the privacy of their apps' end users.
- The central question studied in this paper is Rényi Differential Privacy (RDP) guarantees for general discrete local randomizers in the shuffle privacy model. In the shuffle model, each of the 𝑛 clients randomizes its response using a local differentially private (LDP) mechanism and the untrusted server only receives a random permutation (shuffle) of the client responses without association to each client. The principal result in this paper is the first direct RDP bounds for general discrete local randomization in the shuffle privacy model, and we develop new analysis techniques for deriving our results which could be of independent interest. In applications, such an RDP guarantee is most useful when we use it for composing several private interactions. We numerically demonstrate that, for important regimes, with composition our bound yields an improvement in privacy guarantee by a factor of 8× over the state-of-the-art approximate Differential Privacy (DP) guarantee (with standard composition) for shuffle models. Moreover, combining with Poisson subsampling, our result leads to at least 10× improvement over subsampled approximate DP with standard composition.
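For context on the guarantee being bounded, the standard definition of RDP (due to Mironov, 2017; stated here as background, not as a result of the paper) is: a mechanism M satisfies (α, ε)-RDP if, for all pairs of adjacent datasets D and D',

```latex
D_{\alpha}\bigl(M(D) \,\|\, M(D')\bigr)
  = \frac{1}{\alpha - 1}
    \log \mathbb{E}_{x \sim M(D')}
    \left[ \left( \frac{\Pr[M(D) = x]}{\Pr[M(D') = x]} \right)^{\alpha} \right]
  \le \varepsilon .
```

For a fixed order α, RDP composes additively (the ε values of successive mechanisms simply add), which is why direct RDP bounds pay off precisely in the composed-interaction regime this abstract emphasizes.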
- With an increasing number of Internet of Things (IoT) devices present in homes, there is a rise in the number of potential information leakage channels and their associated security threats and privacy risks. Despite a long history of attacks on IoT devices in unprotected home networks, the problem of accurate, rapid detection and prevention of such attacks remains open. Many existing IoT protection solutions are cloud-based, sometimes ineffective, and might share consumer data with unknown third parties. This paper investigates the potential for effective IoT threat detection locally, on a home router, using AI tools combined with classic rule-based traffic-filtering algorithms. Our results show that with a slight rise of router hardware resources caused by machine learning and traffic filtering logic, a typical home router instrumented with our solution is able to effectively detect risks and protect a typical home IoT network, equaling or outperforming existing popular solutions, without any effects on benign IoT functionality, and without relying on cloud services and third parties.
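The hybrid design described above, rules first with an ML fallback, can be sketched as follows; the blocked ports, flow features, and classifier choice are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative static rules of the kind a router might apply first; the
# specific ports are assumptions (Telnet ports abused by IoT malware such as Mirai).
BLOCKED_DST_PORTS = {23, 2323}

def rule_verdict(flow):
    """Classic rule-based filtering: return 'block' if a rule fires, else None."""
    if flow["dst_port"] in BLOCKED_DST_PORTS:
        return "block"
    return None

# Lightweight on-router classifier for flows the rules do not decide.
# Trained here on synthetic data purely so the sketch runs end to end;
# a real deployment would train offline on labeled flow records.
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))               # columns: pkt_rate, mean_pkt_size, duration
y_train = (X_train[:, 0] > 0.7).astype(int)  # toy label: very high packet rate => malicious
clf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X_train, y_train)

def classify_flow(flow):
    """Hybrid decision: cheap rules first, ML model as the fallback."""
    verdict = rule_verdict(flow)
    if verdict is not None:
        return verdict
    features = np.array([[flow["pkt_rate"], flow["mean_pkt_size"], flow["duration"]]])
    return "block" if clf.predict(features)[0] == 1 else "allow"

print(classify_flow({"dst_port": 23, "pkt_rate": 0.1, "mean_pkt_size": 0.2, "duration": 0.3}))
print(classify_flow({"dst_port": 443, "pkt_rate": 0.9, "mean_pkt_size": 0.5, "duration": 0.4}))
```

Keeping both the rules and the model on the router itself is what avoids the cloud dependence and third-party data sharing the abstract raises as privacy concerns.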