Title: Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks
Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current approaches to privacy-preserving AI/ML (e.g., federated learning, differential privacy, checklists) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.
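As context for the claim that differential privacy addresses only a subset of AI privacy risks, a minimal sketch of the Laplace mechanism (the classic differential-privacy primitive) shows the kind of protection it does offer; the function names and pure-Python setup here are illustrative, not drawn from the paper:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a zero-mean Laplace distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Mechanisms like this protect aggregate statistics computed over training data, but, as the taxonomy argues, they do not touch risks such as deepfake-driven exposure or surveillance incentives created by data collection itself.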
Award ID(s):
2316768
PAR ID:
10543401
Publisher / Repository:
ACM
ISBN:
9798400703300
Page Range / eLocation ID:
1 to 19
Format(s):
Medium: X
Location:
Honolulu HI USA
Sponsoring Org:
National Science Foundation
More Like this
  1. How do practitioners who develop consumer AI products scope, motivate, and conduct privacy work? Respecting privacy is a key principle for developing ethical, human-centered AI systems, but we cannot hope to better support practitioners without answers to that question. We interviewed 35 industry AI practitioners to bridge that gap. We found that practitioners viewed privacy as actions taken against pre-defined intrusions that can be exacerbated by the capabilities and requirements of AI, but few were aware of AI-specific privacy intrusions documented in prior literature. We found that their privacy work was rigidly defined and situated, guided by compliance with privacy regulations and policies, and generally demotivated beyond meeting minimum requirements. Finally, we found that the methods, tools, and resources they used in their privacy work generally did not help address the unique privacy risks introduced or exacerbated by their use of AI in their products. Collectively, these findings reveal the need and opportunity to create tools, resources, and support structures to improve practitioners’ awareness of AI-specific privacy risks, motivations to do AI privacy work, and ability to address privacy harms introduced or exacerbated by their use of AI in consumer products.
  2. What should we do with emotion AI? Should we regulate, ban, promote, or re-imagine it? Emotion AI, a class of affective computing technologies used in personal and social computing, comprises emergent and controversial techniques aiming to classify human emotion and other affective phenomena. Industry, policy, and scientific actors debate potential benefits and harms, arguing for polarized futures ranging from panoptic expansion to complete bans. Emotion AI is proposed, deployed, and sometimes withdrawn in collaborative contexts such as education, hiring, healthcare, and service work. Proponents expound these technologies’ benefits for well-being and security, while critics decry privacy harms, civil liberties risks, bias, shaky scientific foundations, and gaps between technologies’ capabilities and how they are marketed and legitimized. This panel brings diverse disciplinary perspectives into discussion about the history of emotions—as an example of ’intimate’ data—in computing, how emotion AI is legitimized, people’s experiences with and perceptions of emotion AI in social and collaborative settings, emotion AI’s development practices, and using design research to re-imagine emotion AI. These issues are relevant to the CSCW community in designing, evaluating, and regulating algorithmic sensing technologies including and beyond emotion-sensing.
  3. Users often struggle to navigate the privacy/publicity boundary in sharing images online: they may lack awareness of image privacy risks and/or the ability to apply effective mitigation strategies. To address this challenge, we introduce and evaluate Imago Obscura, an AI-powered, image-editing copilot that enables users to identify and mitigate privacy risks with images they intend to share. Driven by design requirements from a formative user study with 7 image-editing experts, Imago Obscura enables users to articulate their image-sharing intent and privacy concerns. The system uses these inputs to surface contextually pertinent privacy risks, and then recommends and facilitates application of a suite of obfuscation techniques found to be effective in prior literature (e.g., inpainting, blurring, and generative content replacement). We evaluated Imago Obscura with 15 end-users in a lab study and found that it greatly improved users’ awareness of image privacy risks and their ability to address those risks, allowing them to make more informed sharing decisions.
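The obfuscation techniques named in the abstract above (blurring, inpainting, content replacement) are standard image-editing operations. A minimal pixelation sketch over a grayscale image stored as a nested list of pixel values conveys the basic idea; this is an illustrative toy, not the Imago Obscura implementation, and all names are hypothetical:

```python
def pixelate(image, block: int):
    """Obfuscate a grayscale image (list of rows of int pixel values)
    by replacing each block-by-block tile with that tile's mean value,
    destroying fine detail while preserving coarse structure."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(0, h, block):
        for x in range(0, w, block):
            # Gather the tile's pixels, clipped to the image boundary.
            tile = [image[j][i]
                    for j in range(y, min(y + block, h))
                    for i in range(x, min(x + block, w))]
            mean = sum(tile) // len(tile)
            for j in range(y, min(y + block, h)):
                for i in range(x, min(x + block, w)):
                    out[j][i] = mean
    return out
```

Real tools operate on raster formats via imaging libraries and, as the study notes, pair such filters with risk detection so users know *what* to obfuscate, not just *how*.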
  4. This lightning talk addresses the pressing need to enhance cybersecurity measures for Hawaii's critical infrastructure, focusing particularly on healthcare and transportation sectors. These sectors have faced significant cybersecurity challenges, with Oahu's transportation services experiencing major breaches and healthcare institutions like Queen's Health System and Malama I Ke Ola suffering from ransomware attacks since 2021. These incidents have led to severe disruptions and compromised sensitive data. Hawaii's geographic isolation, natural disaster risks, legacy systems, and workforce shortages exacerbate these issues. Additionally, emerging technologies such as AI and IoT further expand vulnerabilities. A comprehensive cybersecurity strategy is essential to mitigate these risks. This talk introduces the concept of a volunteer-supported Human-AI Synergy Hotline, which provides proactive advice, crisis management, and emotional support during and after cyber incidents. This innovative approach aims to enhance cybersecurity preparedness and resilience in Hawaii's critical sectors. 
  5. There is a critical need for community engagement in the process of adopting artificial intelligence (AI) technologies in public health. Public health practitioners and researchers have historically innovated in areas like vaccination and sanitation but have been slower in adopting emerging technologies such as generative AI. However, with increasingly complex funding, programming, and research requirements, the field now faces a pivotal moment to enhance its agility and responsiveness to evolving health challenges. Participatory methods and community engagement are key components of many current public health programs and research. The field of public health is well positioned to ensure community engagement is part of AI technologies applied to population health issues. Without such engagement, the adoption of these technologies in public health may exclude significant portions of the population, particularly those with the fewest resources, with the potential to exacerbate health inequities. Risks to privacy and perpetuation of bias are more likely to be avoided if AI technologies in public health are designed with knowledge of community engagement, existing health disparities, and strategies for improving equity. This viewpoint proposes a multifaceted approach to ensure safer and more effective integration of AI in public health with the following call to action: (1) include the basics of AI technology in public health training and professional development; (2) use a community engagement approach to co-design AI technologies in public health; and (3) introduce governance and best practice mechanisms that can guide the use of AI in public health to prevent or mitigate potential harms. 
These actions will support the application of AI to varied public health domains through a framework for more transparent, responsive, and equitable use of this evolving technology, augmenting the work of public health practitioners and researchers to improve health outcomes while minimizing risks and unintended consequences. 