Title: Reconfiguring Diversity and Inclusion for AI Ethics
Activists, journalists, and scholars have long raised critical questions about the relationship between diversity, representation, and structural exclusions in data-intensive tools and services. We build on work mapping the emergent landscape of corporate AI ethics to center one outcome of these conversations: the incorporation of diversity and inclusion into corporate AI ethics activities. Using interpretive document analysis and analytic tools from the values in design field, we examine how diversity and inclusion work is articulated in public-facing AI ethics documentation produced by three companies that create application- and services-layer AI infrastructure: Google, Microsoft, and Salesforce. We find that as these documents make diversity and inclusion more tractable to engineers and technical clients, they reveal a drift away from civil rights justifications that resonates with the “managerialization of diversity” by corporations in the mid-1980s. The focus on technical artifacts — such as diverse and inclusive datasets — and the replacement of equity with fairness make ethical work more actionable for everyday practitioners. Yet these artifacts appear divorced from broader DEI initiatives and from relevant subject matter experts who could provide needed context for nuanced decisions about how to operationalize these values and develop new solutions. Finally, diversity and inclusion, as configured by engineering logic, positions firms not as “ethics owners” but as ethics allocators; while these companies claim expertise on AI ethics, the responsibility of defining whom diversity and inclusion are meant to protect, and where they are relevant, is pushed downstream to their customers.
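To make the abstract's point about engineering tractability concrete, here is a minimal, hypothetical sketch of the kind of fairness check that turns “diversity and inclusion” into a computable artifact. The metric (demographic parity), the function name, and the data are illustrative assumptions, not anything drawn from the paper or from the companies' documentation.

```python
# A minimal sketch of "fairness" as an engineering artifact: a
# demographic-parity check over model outcomes, grouped by a
# protected attribute. All names and data here are hypothetical.
from collections import defaultdict

def demographic_parity_gap(outcomes, groups):
    """Return the largest difference in positive-outcome rates
    between any two groups (0.0 = parity)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: binary loan-approval outcomes for two hypothetical groups.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(outcomes, groups))  # 0.5: far from parity
```

Note what such a sketch leaves undecided: which groups count, and whether parity is even the right target. Those are precisely the judgments the paper argues get allocated downstream to customers.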
Award ID(s):
1835261
PAR ID:
10322862
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. There is growing consensus that teaching computer ethics is important, but there is little consensus on how to do so. One unmet challenge is increasing the capacity of computing students to make decisions about the ethical challenges embedded in their technical work. This paper reports on the design, testing, and evaluation of an educational simulation to meet this challenge. The privacy by design simulation enables more relevant and effective computer ethics education by letting students experience and make decisions about common ethical challenges encountered in real-world work environments. This paper describes the process of incorporating empirical observations of ethical questions in computing into an online simulation and an in-person board game. We employed the Values at Play framework to transform empirical observations of design into a playable educational experience. First, we conducted qualitative research to discover when and how values levers—practices that encourage values discussions during technology development—occur during the design of new mobile applications. We then translated these findings into gameplay elements, including the goals, roles, and elements of surprise incorporated into a simulation. We ran the online simulation in five undergraduate computer and information science classes. Based on this experience, we created a more accessible board game, which we tested in two undergraduate classes and two professional workshops. We evaluated the effectiveness of both the online simulation and the board game using two methods: a pre/post-test of moral sensitivity based on the Defining Issues Test, and a questionnaire evaluating student experience. We found that converting real-world ethical challenges into a playable simulation increased students' reported interest in ethical issues in technology, and that students identified the role-playing activity as relevant to their technical coursework. This demonstrates that role-playing can emphasize ethical decision-making as a relevant component of technical work.
  2. Past work has sought to design AI ethics interventions, such as checklists or toolkits, to help practitioners design more ethical AI systems. However, other work demonstrates how these interventions may instead serve to limit critique to what is addressed within the intervention, while rendering broader concerns illegitimate. In this paper, drawing on work examining how standards enact discursive closure and how power relations affect whether and how people raise critique, we recruit three corporate teams and one activist team, each with prior experience working with one another, to play a game designed to trigger broad discussion around AI ethics. We use this as a point of contrast to trigger reflection on their teams' past discussions, examining factors that may affect their “license to critique” in AI ethics discussions. We then report on how particular affordances of this game may influence discussion, and find that the hypothetical context created in the game is unlikely to be a viable mechanism for real-world change. We discuss how power dynamics within a group and notions of “scope” affect whether people may be willing to raise critique in AI ethics discussions, and report our finding that games are unlikely to enable direct changes to products or practice, but may be more likely to allow members to find critically aligned allies for future collective action.
  3. User experience (UX) professionals' attempts to address social values as a part of their work practice can overlap with tactics to contest, resist, or change the companies they work for. This paper studies tactics that take place in this overlap, where UX professionals try to re-shape the values embodied and promoted by their companies, in addition to the values embodied and promoted in the technical systems and products that their companies produce. Through interviews with UX professionals working at large U.S.-based technology companies and observations at UX meetup events, this paper identifies tactics used towards three goals: (1) creating space for UX expertise to address values; (2) making values visible and relevant to other organizational stakeholders; and (3) changing organizational processes and orientations towards values. This paper analyzes these as tactics of resistance: UX professionals seek to subvert or change existing practices and organizational structures towards more values-conscious ends. Yet, these tactics of resistance often rely on the dominant discourses and logics of the technology industry. The paper characterizes these as partial or soft tactics, but also argues that they nevertheless hold possibilities for enacting values-oriented changes. 
  4. A quiet revolution is afoot in the field of law. Technical systems employing algorithms are shaping and displacing professional decision making, and they are disrupting and restructuring relationships between law firms, lawyers, and clients. Decision-support systems marketed to legal professionals to support e-discovery—generally referred to as “technology assisted review” (TAR)—increasingly rely on “predictive coding”: machine-learning techniques to classify and predict which of the voluminous electronic documents subject to litigation should be withheld or produced to the opposing side. These systems and the companies offering them are reshaping relationships between lawyers and clients, introducing new kinds of professionals into legal practice, altering the discovery process, and shaping how lawyers construct knowledge about their cases and professional obligations. In the midst of these shifting relationships—and the ways in which these systems are shaping the construction and presentation of knowledge—lawyers are grappling with their professional obligations, ethical duties, and what it means for the future of legal practice. Through in-depth, semi-structured interviews of experts in the e-discovery technology space—the technology company representatives who develop and sell such systems to law firms and the legal professionals who decide whether and how to use them in practice—we shed light on the organizational structures, professional rules and norms, and technical system properties that are shaping and being reshaped by predictive coding systems. Our findings show that AI-supported decision systems such as these are reconfiguring professional work practices. In particular, they highlight concerns about potential loss of professional agency and skill, limited understanding and thereby both over- and under-reliance on decision-support systems, and confusion about responsibility and accountability as new kinds of technical professionals and technologies are brought into legal practice. The introduction of predictive coding systems and the new professional and organizational arrangements they are ushering into legal practice compound general concerns over the opacity of technical systems with specific concerns about encroachments on the construction of expert knowledge, liability frameworks, and the potential (mis)alignment of machine reasoning with professional logic and ethics. Based on our findings, we conclude that predictive coding tools—and likely other algorithmic systems lawyers use to construct knowledge and reason about legal practice—challenge the current model for evaluating whether and how tools are appropriate for legal practice. As tools become both more complex and more consequential, it is unreasonable to rely solely on legal professionals—judges, law firms, and lawyers—to determine which technologies are appropriate for use. The legal professionals we interviewed report relying on the evaluation and judgment of a range of new technical experts within law firms and, increasingly, on third-party vendors and their technical experts. This system for choosing the technical systems upon which lawyers rely to make professional decisions—e.g., whether documents are responsive, or whether the standard of proportionality has been met—is no longer sufficient.
    As the tools of medicine are reviewed by appropriate experts before they are put out for consideration and adoption by medical professionals, we argue that the legal profession must develop new processes for determining which algorithmic tools are fit to support lawyers' decision making. Relatedly, because predictive coding systems are used to produce lawyers' professional judgment, we argue they must be designed for contestability—providing greater transparency, interaction, and configurability around embedded choices to ensure that decisions about how to embed core professional judgments, such as relevance and proportionality, remain salient and demand engagement from lawyers, not just their technical experts.
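To ground the abstract's description of “predictive coding,” the following is a deliberately simplified sketch of the underlying classification step, written with scikit-learn. The documents, labels, and model choice are invented for illustration; real TAR products layer iterative review, sampling, and validation protocols on top of anything this simple.

```python
# A minimal sketch of the classification step behind "predictive
# coding": train on documents lawyers have labeled responsive or
# non-responsive, then rank the unreviewed corpus. All documents and
# labels here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_docs = [
    "merger agreement draft attached",   # responsive
    "quarterly pricing discussion",      # responsive
    "office holiday party schedule",     # not responsive
    "cafeteria menu for next week",      # not responsive
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(labeled_docs)
model = LogisticRegression().fit(X, labels)

# Score unreviewed documents; high scores go to human review first.
unreviewed = ["revised merger pricing terms", "parking lot repaving notice"]
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

Every choice buried in a pipeline like this one, from the feature representation to the score threshold that triggers human review, is the kind of embedded judgment the authors argue must remain contestable by lawyers rather than settled by vendor defaults.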
  5. The rapid expansion of Artificial Intelligence (AI) necessitates educating students to become knowledgeable about AI and aware of its interrelated technical, social, and human implications. The latter (ethics) is particularly important for K-12 students because they may have been interacting with AI through everyday technology without realizing it. They may be targeted by AI-generated fake content on social media and may have been victims of algorithmic bias in AI applications such as facial recognition and predictive policing. To empower students to recognize ethics-related issues of AI, this paper reports the design and implementation of a suite of ethics activities embedded in the Developing AI Literacy (DAILy) curriculum. These activities engage students in investigating bias in existing technologies, experimenting with ways to mitigate potential bias, and redesigning the YouTube recommendation system in order to understand different aspects of AI-related ethics issues. Our observations of implementing these lessons among adolescents and exit interviews show that students were highly engaged and, after these ethics lessons, became aware of the potential harms and consequences of AI tools in everyday life.
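As a hypothetical illustration of the bias-investigation activities described above, a classroom exercise might compare a classifier's accuracy across demographic groups, in the spirit of published audits of facial recognition systems. The function and numbers below are invented for demonstration and are not taken from the DAILy curriculum.

```python
# Hypothetical illustration of a bias investigation: compare a
# classifier's accuracy across demographic groups. The predictions,
# ground truths, and group labels are invented for demonstration.
def accuracy_by_group(predictions, truths, groups):
    stats = {}
    for pred, truth, group in zip(predictions, truths, groups):
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (pred == truth), total + 1)
    return {g: c / t for g, (c, t) in stats.items()}

predictions = [1, 1, 0, 1, 0, 0, 1, 0]
truths      = [1, 1, 0, 0, 1, 0, 0, 0]
groups      = ["x", "x", "x", "x", "y", "y", "y", "y"]
print(accuracy_by_group(predictions, truths, groups))
# {'x': 0.75, 'y': 0.5} -- the gap itself becomes the starting point
# for classroom discussion about harms and mitigation.
```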