Title: The Forgotten Margins of AI Ethics
How has recent AI Ethics literature addressed topics such as fairness and justice in the context of continued social and structural power asymmetries? We trace both the historical roots and the current landmark works that have been shaping the field and categorize these works under three broad umbrellas: (i) those grounded in Western canonical philosophy, (ii) mathematical and statistical methods, and (iii) those emerging from critical data/algorithm/information studies. We also survey the field and explore emerging trends by examining the rapidly growing body of literature that falls under the broad umbrella of AI Ethics. To that end, we read and annotated peer-reviewed papers published over the past four years in two premier conferences: FAccT and AIES. We organize the literature based on an annotation scheme we developed around three main dimensions: whether a paper deals with concrete applications, use-cases, and/or people's lived experience; to what extent it addresses harmed, threatened, or otherwise marginalized groups; and, if so, whether it explicitly names such groups. We note that although the goals of the majority of FAccT and AIES papers were commendable, their consideration of the negative impacts of AI on traditionally marginalized groups remained shallow. Taken together, our conceptual analysis and the data from the annotated papers indicate that the field would benefit from an increased focus on ethical analysis grounded in concrete use-cases, people's experiences, and applications, as well as from approaches that are sensitive to structural and historical power asymmetries.
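The annotation scheme described above lends itself to a simple structured record per paper. The sketch below is a minimal, hypothetical rendering in Python: the class, field names, and value scale are our own illustration of the three dimensions, not the authors' actual coding instrument.

```python
from dataclasses import dataclass, field
from enum import Enum


class MarginalizedFocus(Enum):
    """Degree to which a paper addresses harmed or marginalized groups."""
    NONE = 0      # no consideration of negatively impacted groups
    SHALLOW = 1   # groups acknowledged, but harms not examined in depth
    DEEP = 2      # harms to specific groups analyzed concretely


@dataclass
class PaperAnnotation:
    """One annotated FAccT/AIES paper, coded along the three dimensions
    described in the abstract (all names here are illustrative)."""
    title: str
    venue: str                             # "FAccT" or "AIES"
    year: int
    concrete_grounding: bool               # concrete applications, use-cases,
                                           # or lived experience?
    marginalized_focus: MarginalizedFocus  # extent of attention to such groups
    named_groups: list[str] = field(default_factory=list)  # groups explicitly named


# Hypothetical example record:
paper = PaperAnnotation(
    title="An example FAccT paper",
    venue="FAccT",
    year=2021,
    concrete_grounding=True,
    marginalized_focus=MarginalizedFocus.SHALLOW,
)
```

A record structure like this makes the paper's headline finding easy to state as a query: count the papers where concrete_grounding is true and marginalized_focus is DEEP.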
Award ID(s):
2218226
PAR ID:
10407217
Author(s) / Creator(s):
Date Published:
Journal Name:
2022 ACM Conference on Fairness, Accountability, and Transparency
Page Range / eLocation ID:
948 to 958
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society’s most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench “bias,” are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI’s long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society. 
  2. Past work has sought to design AI ethics interventions, such as checklists or toolkits, to help practitioners design more ethical AI systems. However, other work demonstrates how these interventions may instead serve to limit critique to that addressed within the intervention, while rendering broader concerns illegitimate. In this paper, drawing on work examining how standards enact discursive closure and how power relations affect whether and how people raise critique, we recruit three corporate teams and one activist team, each with prior experience working with one another, to play a game designed to trigger broad discussion around AI ethics. We use this as a point of contrast to trigger reflection on the teams’ past discussions, examining factors that may affect their “license to critique” in AI ethics discussions. We then report on how particular affordances of this game may influence discussion, and find that the hypothetical context created in the game is unlikely to be a viable mechanism for real-world change. We discuss how power dynamics within a group and notions of “scope” affect whether people may be willing to raise critique in AI ethics discussions, and we discuss our finding that games are unlikely to enable direct changes to products or practice but may allow members to find critically aligned allies for future collective action.
  3. In this experience report, we describe an AI summer workshop designed to prepare middle school students to become informed citizens and critical consumers of AI technology and to develop their foundational knowledge and skills to support future endeavors as AI-empowered workers. The workshop featured the 30-hour "Developing AI Literacy" or DAILy curriculum that is grounded in literature on child development, ethics education, and career development. The participants in the workshop were students between the ages of 10 and 14; 87% were from underrepresented groups in STEM and Computing. In this paper we describe the online curriculum, its implementation during synchronous online workshop sessions in the summer of 2020, and preliminary findings on student outcomes. We reflect on the successes and lessons we learned in terms of supporting students' engagement and conceptual learning of AI, shifting attitudes toward AI, and fostering conceptions of future selves as AI-enabled workers. We conclude with discussions of the affordances of and barriers to bringing AI education to students from underrepresented groups in STEM and Computing.
  4. Understanding the motivations underlying acts of hatred is essential for developing strategies to prevent such extreme behavioral expressions of prejudice (EBEPs) against marginalized groups. In this work, we investigate the motivations underlying EBEPs as a function of moral values. Specifically, we propose that EBEPs may often be best understood as morally motivated behaviors grounded in people’s moral values and perceptions of moral violations. As evidence, we report five studies that integrate spatial modeling and experimental methods to investigate the relationship between moral values and EBEPs. Our results from these U.S.-based studies suggest that moral values oriented around group preservation are predictive of the county-level prevalence of hate groups and are associated with the belief that extreme behavioral expressions of prejudice against marginalized groups are justified. Additional analyses suggest that the association between group-based moral values and EBEPs against outgroups can be partly explained by the belief that these groups have done something morally wrong.
  5. The computing education research community now has at least 40 years of published research on teaching ethics in higher education. To examine the state of our field, we present a systematic literature review of papers in the Association for Computing Machinery computing education venues that describe teaching ethics in higher-education computing courses. Our review spans all papers published in the SIGCSE, ICER, ITiCSE, CompEd, Koli Calling, and TOCE venues through 2022, with 100 papers fulfilling our inclusion criteria. Overall, we found a wide variety in content, teaching strategies, challenges, and recommendations. The majority of the papers did not articulate a conception of “ethics,” and those that did used many different conceptions, from broadly applicable ethical theories to social impact to specific computing application areas (e.g., data privacy and hacking). Instructors used many different pedagogical strategies (e.g., discussions, lectures, assignments) and formats (e.g., stand-alone courses, incorporation within a technical course). Many papers identified measuring student knowledge as a particular challenge, and 59% of papers mentioned assessments or grading. Of the 69% of papers that evaluated their ethics instruction, most used student self-report surveys, course evaluations, and instructor reflections. While many papers included calls for more ethics content in computing, specific recommendations were rarely broadly applicable, preventing a synthesis of guidelines. To continue building on the last 40 years of research and move toward a set of best practices for teaching ethics in computing, our community should delineate our varied conceptions of ethics, examine which teaching strategies are best suited to each, and explore how to measure student learning.