How do practitioners who develop consumer AI products scope, motivate, and conduct privacy work? Respecting privacy is a key principle for developing ethical, human-centered AI systems, but we cannot hope to better support practitioners without answers to that question. We interviewed 35 industry AI practitioners to bridge that gap. We found that practitioners viewed privacy as actions taken against pre-defined intrusions that can be exacerbated by the capabilities and requirements of AI, but few were aware of AI-specific privacy intrusions documented in prior literature. We found that their privacy work was rigidly defined and situated, guided by compliance with privacy regulations and policies, and generally demotivated beyond meeting minimum requirements. Finally, we found that the methods, tools, and resources they used in their privacy work generally did not help address the unique privacy risks introduced or exacerbated by their use of AI in their products. Collectively, these findings reveal the need and opportunity to create tools, resources, and support structures to improve practitioners' awareness of AI-specific privacy risks, motivations to do AI privacy work, and ability to address privacy harms introduced or exacerbated by their use of AI in consumer products.
Deepfakes, Phrenology, Surveillance, and More! A Taxonomy of AI Privacy Risks
Privacy is a key principle for developing ethical AI technologies, but how does including AI technologies in products and services change privacy risks? We constructed a taxonomy of AI privacy risks by analyzing 321 documented AI privacy incidents. We codified how the unique capabilities and requirements of AI technologies described in those incidents generated new privacy risks, exacerbated known ones, or otherwise did not meaningfully alter the risk. We present 12 high-level privacy risks that AI technologies either newly created (e.g., exposure risks from deepfake pornography) or exacerbated (e.g., surveillance risks from collecting training data). One upshot of our work is that incorporating AI technologies into a product can alter the privacy risks it entails. Yet, current approaches to privacy-preserving AI/ML (e.g., federated learning, differential privacy, checklists) only address a subset of the privacy risks arising from the capabilities and data requirements of AI.
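To ground what the privacy-preserving approaches named above actually cover, the following is a minimal, illustrative sketch of one of them: differential privacy, via the Laplace mechanism applied to a simple count query. It is not code from the paper; the function name dp_count and its parameters are hypothetical, and production systems should use vetted DP libraries rather than hand-rolled noise.

    import numpy as np

    def dp_count(records, predicate, epsilon=1.0):
        """Illustrative sketch (not from the paper): a differentially private count.

        A count query has L1 sensitivity 1 (adding or removing one person's
        record changes the count by at most 1), so adding Laplace noise with
        scale 1/epsilon satisfies epsilon-differential privacy.
        """
        true_count = sum(1 for r in records if predicate(r))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Hypothetical usage: release an aggregate statistic without revealing
    # whether any one individual's record is in the data.
    ages = [34, 71, 29, 65, 48, 80, 55]
    print(dp_count(ages, lambda age: age >= 65, epsilon=0.5))

The sketch also illustrates the abstract's closing point: differential privacy protects individuals' records inside aggregate computations, so it can mitigate risks such as training-data leakage, but it has no purchase on risks like deepfake-based exposure, which involve no aggregate query at all.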
- Award ID(s): 2316768
- PAR ID: 10543401
- Publisher / Repository: ACM
- Date Published:
- ISBN: 9798400703300
- Page Range / eLocation ID: 1 to 19
- Format(s): Medium: X
- Location: Honolulu, HI, USA
- Sponsoring Org: National Science Foundation
More Like this
- What should we do with emotion AI? Should we regulate, ban, promote, or re-imagine it? Emotion AI, a class of affective computing technologies used in personal and social computing, comprises emergent and controversial techniques aiming to classify human emotion and other affective phenomena. Industry, policy, and scientific actors debate potential benefits and harms, arguing for polarized futures ranging from panoptic expansion to complete bans. Emotion AI is proposed, deployed, and sometimes withdrawn in collaborative contexts such as education, hiring, healthcare, and service work. Proponents expound these technologies' benefits for well-being and security, while critics decry privacy harms, civil liberties risks, bias, shaky scientific foundations, and gaps between technologies' capabilities and how they are marketed and legitimized. This panel brings diverse disciplinary perspectives into discussion about the history of emotions (as an example of 'intimate' data) in computing, how emotion AI is legitimized, people's experiences with and perceptions of emotion AI in social and collaborative settings, emotion AI's development practices, and using design research to re-imagine emotion AI. These issues are relevant to the CSCW community in designing, evaluating, and regulating algorithmic sensing technologies including and beyond emotion-sensing.
- This lightning talk addresses the pressing need to enhance cybersecurity measures for Hawaii's critical infrastructure, focusing particularly on healthcare and transportation sectors. These sectors have faced significant cybersecurity challenges, with Oahu's transportation services experiencing major breaches and healthcare institutions like Queen's Health System and Malama I Ke Ola suffering from ransomware attacks since 2021. These incidents have led to severe disruptions and compromised sensitive data. Hawaii's geographic isolation, natural disaster risks, legacy systems, and workforce shortages exacerbate these issues. Additionally, emerging technologies such as AI and IoT further expand vulnerabilities. A comprehensive cybersecurity strategy is essential to mitigate these risks. This talk introduces the concept of a volunteer-supported Human-AI Synergy Hotline, which provides proactive advice, crisis management, and emotional support during and after cyber incidents. This innovative approach aims to enhance cybersecurity preparedness and resilience in Hawaii's critical sectors.
- There is a critical need for community engagement in the process of adopting artificial intelligence (AI) technologies in public health. Public health practitioners and researchers have historically innovated in areas like vaccination and sanitation but have been slower in adopting emerging technologies such as generative AI. However, with increasingly complex funding, programming, and research requirements, the field now faces a pivotal moment to enhance its agility and responsiveness to evolving health challenges. Participatory methods and community engagement are key components of many current public health programs and research. The field of public health is well positioned to ensure community engagement is part of AI technologies applied to population health issues. Without such engagement, the adoption of these technologies in public health may exclude significant portions of the population, particularly those with the fewest resources, with the potential to exacerbate health inequities. Risks to privacy and perpetuation of bias are more likely to be avoided if AI technologies in public health are designed with knowledge of community engagement, existing health disparities, and strategies for improving equity. This viewpoint proposes a multifaceted approach to ensure safer and more effective integration of AI in public health with the following call to action: (1) include the basics of AI technology in public health training and professional development; (2) use a community engagement approach to co-design AI technologies in public health; and (3) introduce governance and best practice mechanisms that can guide the use of AI in public health to prevent or mitigate potential harms. These actions will support the application of AI to varied public health domains through a framework for more transparent, responsive, and equitable use of this evolving technology, augmenting the work of public health practitioners and researchers to improve health outcomes while minimizing risks and unintended consequences.
- There is a substantial and ever-growing corpus of evidence and literature exploring the impacts of artificial intelligence (AI) technologies on society, politics, and humanity as a whole. A separate, parallel body of work has explored existential risks to humanity, including but not limited to those stemming from unaligned Artificial General Intelligence (AGI). In this paper, we problematise the notion that current and near-term artificial intelligence technologies have the potential to contribute to existential risk by acting as intermediate risk factors, and that this potential is not limited to the unaligned AGI scenario. We propose the hypothesis that certain already-documented effects of AI can act as existential risk factors, magnifying the likelihood of previously identified sources of existential risk. Moreover, future developments in the coming decade hold the potential to significantly exacerbate these risk factors, even in the absence of artificial general intelligence. Our main contribution is a (non-exhaustive) exposition of potential AI risk factors and the causal relationships between them, focusing on how AI can affect power dynamics and information security. This exposition demonstrates that there exist causal pathways from AI systems to existential risks that do not presuppose hypothetical future AI capabilities.