How do software engineers identify and act on their ethical concerns? Past work examines how software practitioners navigate specific ethical principles such as “fairness”, but this narrows the scope of concerns to implementing pre-specified principles. In contrast, we report the self-identified ethical concerns of 115 survey respondents and 21 interviewees across five continents and in non-profit, contractor, and non-tech firms. We enumerate both the breadth of their concerns (military, privacy, advertising, and surveillance, among others) and their scope, from simple bugs to questioning their industry’s entire existence. We illustrate how attempts to resolve concerns are limited by factors such as personal precarity and organizational incentives. We discuss how even relatively powerful software engineers often lacked the power to resolve their ethical concerns. Our results suggest that ethics interventions must expand from helping practitioners merely identify issues to helping them build the (collective) power to resolve them, and that tech ethics discussions should consider broadening beyond their current foci on AI and Big Tech.
Power and Play: Investigating “License to Critique” in Teams’ AI Ethics Discussions
Past work has sought to design AI ethics interventions, such as checklists or toolkits, to help practitioners design more ethical AI systems. However, other work demonstrates how these interventions may instead serve to limit critique to what the intervention itself addresses, while rendering broader concerns illegitimate. In this paper, drawing on work examining how standards enact discursive closure and how power relations affect whether and how people raise critique, we recruit three corporate teams and one activist team, each with prior experience working together, to play a game designed to trigger broad discussion around AI ethics. We use this as a point of contrast to prompt reflection on each team’s past discussions, examining factors that may affect their “license to critique” in AI ethics discussions. We then report on how particular affordances of this game may influence discussion, and find that the hypothetical context created in the game is unlikely to be a viable mechanism for real-world change. We discuss how power dynamics within a group and notions of “scope” affect whether people are willing to raise critique in AI ethics discussions, and report our finding that games are unlikely to enable direct changes to products or practice, but may help members find critically-aligned allies for future collective action.
- Award ID(s): 2107298
- PAR ID: 10552822
- Publisher / Repository: ACM
- Date Published:
- Volume: 8
- Issue: CSCW2
- ISSN: 1573-7551
- ISBN: 979-8-4007-0129-0
- Page Range / eLocation ID: 399
- Format(s): Medium: X
- Location: San José, Costa Rica
- Sponsoring Org: National Science Foundation
More Like this
This paper considers the cultivation of ethical identities among future engineers and computer scientists, particularly those whose professional practice will extensively intersect with emerging technologies enabled by artificial intelligence (AI). Many current engineering and computer science students will go on to participate in the development and refinement of AI, machine learning, robotics, and related technologies, thereby helping to shape the future directions of these applications. Researchers have demonstrated the actual and potential deleterious effects that these technologies can have on individuals and communities. Together, these trends present a timely opportunity to steer AI and robotic design in directions that confront, or at least do not extend, patterns of discrimination, marginalization, and exclusion. Examining ethics interventions in AI and robotics education may yield insights into challenges and opportunities for cultivating ethical engineers. We present our ongoing research on engineering ethics education, examine how our work is situated with respect to current AI and robotics applications, and discuss a curricular module in “Robot Ethics” that was designed to achieve interdisciplinary learning objectives. Finally, we offer recommendations for more effective engineering ethics education, with a specific focus on emerging technologies.
How has recent AI Ethics literature addressed topics such as fairness and justice in the context of continued social and structural power asymmetries? We trace both the historical roots and current landmark work that have been shaping the field and categorize these works under three broad umbrellas: (i) those grounded in Western canonical philosophy, (ii) mathematical and statistical methods, and (iii) those emerging from critical data/algorithm/information studies. We also survey the field and explore emerging trends by examining the rapidly growing body of literature that falls under the broad umbrella of AI Ethics. To that end, we read and annotated peer-reviewed papers published over the past four years in two premier conferences: FAccT and AIES. We organize the literature based on an annotation scheme we developed according to three main dimensions: whether the paper deals with concrete applications, use-cases, and/or people’s lived experience; to what extent it addresses harmed, threatened, or otherwise marginalized groups; and if so, whether it explicitly names such groups. We note that although the goals of the majority of FAccT and AIES papers were often commendable, their consideration of the negative impacts of AI on traditionally marginalized groups remained shallow. Taken together, our conceptual analysis and the data from annotated papers indicate that the field would benefit from an increased focus on ethical analysis grounded in concrete use-cases, people’s experiences, and applications as well as from approaches that are sensitive to structural and historical power asymmetries.
*Uncertainty expressions* such as ‘probably’ or ‘highly unlikely’ are pervasive in human language. While prior work has established that there is population-level agreement in terms of how humans quantitatively interpret these expressions, there has been little inquiry into the abilities of language models in the same context. In this paper, we investigate how language models map linguistic expressions of uncertainty to numerical responses. Our approach assesses whether language models can employ theory of mind in this setting: understanding the uncertainty of another agent about a particular statement, independently of the model’s own certainty about that statement. We find that 7 out of 10 models are able to map uncertainty expressions to probabilistic responses in a human-like manner. However, we observe systematically different behavior depending on whether a statement is actually true or false. This sensitivity indicates that language models are substantially more susceptible to bias based on their prior knowledge (as compared to humans). These findings raise important questions and have broad implications for human-AI and AI-AI communication.
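As an illustration of the kind of probe this abstract describes, the sketch below pairs population-level human interpretations of uncertainty expressions with a model-elicited number and compares the two for true and false statements. The `human_estimates` values, the prompt wording, and the `query_model` stub are illustrative assumptions, not the paper’s actual protocol, data, or code.

```python
# Minimal sketch: compare a model's numeric reading of an uncertainty
# expression against (assumed) population-level human interpretations.

# Illustrative human interpretations of each expression as a probability.
human_estimates = {
    "almost certainly": 0.95,
    "probably": 0.70,
    "possibly": 0.40,
    "highly unlikely": 0.05,
}


def query_model(prompt: str) -> float:
    """Stand-in for a language model call that returns a number in [0, 1].

    Replace with a real API call; a fixed value is returned here so the
    sketch runs without external dependencies.
    """
    return 0.5


def probe(expression: str, statement: str) -> dict:
    """Ask the model how likely a speaker who used `expression` thinks
    `statement` is, and compare to the human population estimate."""
    prompt = (
        f'Someone describes the statement "{statement}" as '
        f'"{expression}". On a scale from 0 to 1, how likely does '
        "that person think the statement is? Reply with one number."
    )
    model_estimate = query_model(prompt)
    return {
        "expression": expression,
        "human": human_estimates[expression],
        "model": model_estimate,
        "gap": abs(model_estimate - human_estimates[expression]),
    }


if __name__ == "__main__":
    # Pair each expression with a true and a false statement to look for
    # the prior-knowledge sensitivity the abstract reports.
    for expr in human_estimates:
        print(probe(expr, "Paris is the capital of France"))   # true
        print(probe(expr, "Paris is the capital of Germany"))  # false
```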
Analysis of municipal wastewater, or sewage, for public health applications is a rapidly expanding field aimed at understanding emerging epidemiological trends, including human and disease migration. The newly gained ability to extract and analyze genetic material from wastewater poses important societal and ethical questions, including: How to safeguard data? Who owns genetic data recovered from wastewater? What are the ethical and legal issues surrounding its use? In the U.S., both corporate and legal policies regarding privacy have been historically reactive instead of proactive. In wastewater-based epidemiology (WBE), the pace of innovation has outpaced the ability of social and legal mechanisms to keep up. To address this discrepancy, early and robust discussions of the research, policies, and ethics surrounding WBE analysis and genetics are needed. This paper contributes to this discussion by examining ownership issues for human genetic data recovered from wastewater and the uses to which it may be put. We focus particularly on the risks associated with personally identifiable data, highlighting potential risks, relevant privacy-enhancing technologies, and appropriate ethics. The paper proposes an approach for people conducting WBE studies to help them systematically consider the ethical and privacy implications of their work.

