Title: “I Don’t Know If We’re Doing Good. I Don’t Know If We’re Doing Bad”: Investigating How Practitioners Scope, Motivate, and Conduct Privacy Work When Developing AI Products
How do practitioners who develop consumer AI products scope, motivate, and conduct privacy work? Respecting privacy is a key principle for developing ethical, human-centered AI systems, but we cannot hope to better support practitioners without answers to that question. We interviewed 35 industry AI practitioners to bridge that gap. We found that practitioners viewed privacy as actions taken against pre-defined intrusions that can be exacerbated by the capabilities and requirements of AI, but few were aware of AI-specific privacy intrusions documented in prior literature. We found that their privacy work was rigidly defined and situated, guided by compliance with privacy regulations and policies, and generally demotivated beyond meeting minimum requirements. Finally, we found that the methods, tools, and resources they used in their privacy work generally did not help address the unique privacy risks introduced or exacerbated by their use of AI in their products. Collectively, these findings reveal the need and opportunity to create tools, resources, and support structures to improve practitioners’ awareness of AI-specific privacy risks, motivations to do AI privacy work, and ability to address privacy harms introduced or exacerbated by their use of AI in consumer products.
Award ID(s):
2316768
PAR ID:
10543400
Author(s) / Creator(s):
Publisher / Repository:
USENIX Security
Date Published:
ISBN:
978-1-939133-44-1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current approaches to privacy-preserving AI/ML (e.g., federated learning, differential privacy, checklists) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.
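  To make one of the named approaches concrete: differential privacy typically works by adding calibrated noise to released statistics. Below is a minimal illustrative sketch (not drawn from the paper) of the standard Laplace mechanism in Python; the function name and parameter values are hypothetical.

      import numpy as np

      def laplace_mechanism(true_value, sensitivity, epsilon):
          # Epsilon-differentially-private release: add Laplace noise
          # with scale = sensitivity / epsilon to the true statistic.
          scale = sensitivity / epsilon
          return true_value + np.random.laplace(loc=0.0, scale=scale)

      # Hypothetical usage: privately release a count query (sensitivity 1).
      noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)

  As the abstract observes, mechanisms like this guard against disclosure from released data or model outputs, but leave untouched many of the other risks in the taxonomy (e.g., surveillance risks from the data collection itself).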
  2. There is a critical need for community engagement in the process of adopting artificial intelligence (AI) technologies in public health. Public health practitioners and researchers have historically innovated in areas like vaccination and sanitation but have been slower in adopting emerging technologies such as generative AI. However, with increasingly complex funding, programming, and research requirements, the field now faces a pivotal moment to enhance its agility and responsiveness to evolving health challenges. Participatory methods and community engagement are key components of many current public health programs and research. The field of public health is well positioned to ensure community engagement is part of AI technologies applied to population health issues. Without such engagement, the adoption of these technologies in public health may exclude significant portions of the population, particularly those with the fewest resources, with the potential to exacerbate health inequities. Risks to privacy and perpetuation of bias are more likely to be avoided if AI technologies in public health are designed with knowledge of community engagement, existing health disparities, and strategies for improving equity. This viewpoint proposes a multifaceted approach to ensure safer and more effective integration of AI in public health with the following call to action: (1) include the basics of AI technology in public health training and professional development; (2) use a community engagement approach to co-design AI technologies in public health; and (3) introduce governance and best practice mechanisms that can guide the use of AI in public health to prevent or mitigate potential harms. These actions will support the application of AI to varied public health domains through a framework for more transparent, responsive, and equitable use of this evolving technology, augmenting the work of public health practitioners and researchers to improve health outcomes while minimizing risks and unintended consequences. 
  3. An emerging body of research indicates that ineffective cross-functional collaboration – the interdisciplinary work done by industry practitioners across roles – represents a major barrier to addressing issues of fairness in AI design and development. In this research, we sought to better understand practitioners’ current practices and tactics to enact cross-functional collaboration for AI fairness, in order to identify opportunities to support more effective collaboration. We conducted a series of interviews and design workshops with 23 industry practitioners spanning various roles from 17 companies. We found that practitioners engaged in bridging work to overcome frictions in understanding, contextualization, and evaluation around AI fairness across roles. In addition, in organizational contexts with a lack of resources and incentives for fairness work, practitioners often piggybacked on existing requirements (e.g., for privacy assessments) and AI development norms (e.g., the use of quantitative evaluation metrics), although they worry that these tactics may be fundamentally compromised. Finally, we draw attention to the invisible labor that practitioners take on as part of this bridging and piggybacking work to enact interdisciplinary collaboration for fairness. We close by discussing opportunities for both FAccT researchers and AI practitioners to better support cross-functional collaboration for fairness in the design and development of AI systems. 
  4. The increased use of smart home devices (SHDs) on short-term rental (STR) properties raises privacy concerns for guests. While previous literature identifies guests’ privacy concerns and the need to negotiate guests’ privacy preferences with hosts, there is a lack of research from the hosts’ perspectives. This paper investigates if and how hosts consider guests’ privacy when using their SHDs on their STRs, to understand hosts’ willingness to accommodate guests’ privacy concerns, a starting point for negotiation. We conducted online interviews with 15 STR hosts (e.g., Airbnb/Vrbo), finding that they generally use, manage, and disclose their SHDs in ways that protect guests’ privacy. However, hosts’ practices fell short of their intentions because of competing needs and goals (i.e., protecting their property versus protecting guests’ privacy). Findings also highlight that hosts do not have proper support from the platforms on how to navigate these competing goals. Therefore, we discuss how to improve platforms’ guidelines/policies to prevent and resolve conflicts with guests and measures to increase engagement from both sides to set the ground for negotiation.
  5. Reiter, Harvey L (Ed.)
    Adopting Artificial Intelligence (AI) in electric utilities signifies vast, yet largely untapped potential for accelerating a clean energy transition. This requires tackling complex challenges such as trustworthiness, explainability, privacy, cybersecurity, and governance, balancing these against AI’s benefits. This article aims to facilitate dialogue among regulators, policymakers, utilities, and other stakeholders on navigating these complex issues, fostering a shared understanding and approach to leveraging AI’s transformative power responsibly. The complex interplay of state and federal regulations necessitates careful coordination, particularly as AI impacts energy markets and national security. Promoting data sharing with privacy and cybersecurity in mind is critical. The article advocates for ‘realistic open benchmarks’ to foster innovation without compromising confidentiality. Trustworthiness (the system’s ability to ensure reliability and performance, and to inspire confidence and transparency) and explainability (ensuring that AI decisions are understandable and accessible to a large diversity of participants) are fundamental for AI acceptance, necessitating transparent, accountable, and reliable systems. AI must be deployed in a way that helps keep the lights on. As AI becomes more involved in decision-making, we need to think about who’s responsible and what’s ethical. With the current state of the art, using generative AI for critical, near real-time decision-making should be approached carefully. While AI is advancing rapidly both in terms of technology and regulation, within and beyond the scope of energy-specific applications, this article aims to provide timely insights and a common understanding of AI, its opportunities and challenges for electric utility use cases, and ultimately help advance its adoption in the power system sector, to accelerate the equitable clean energy transition.