
Title: Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures Through AI Systems
Abstract The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals. Drawing on Sen and Nussbaum's capability approach, we present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders. Such systems enhance stakeholders' ability to advance their life plans and well-being while upholding their fundamental rights. We characterize two necessary conditions for morally permissible interactions between AI systems and those impacted by their functioning, and two sufficient conditions for realizing the ideal of meaningful benefit. We then contrast this ideal with several salient failure modes, namely, forms of social interactions that constitute unjustified paternalism, coercion, deception, exploitation, and domination. The proliferation of incidents involving AI in high-stakes domains underscores the gravity of these issues and the imperative to take an ethics-led approach to AI systems from their inception.
Award ID(s):
2040929
PAR ID:
10544791
Author(s) / Creator(s):
Publisher / Repository:
Springer Science + Business Media
Date Published:
Journal Name:
Minds and Machines
Volume:
34
Issue:
4
ISSN:
1572-8641
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Abstract The explosive growth of artificial intelligence (AI) over the past few years has focused attention on how diverse stakeholders regulate these technologies to ensure their safe and ethical use. Increasingly, governmental bodies, corporations, and nonprofit organizations are developing strategies and policies for AI governance. While existing literature on ethical AI has focused on the various principles and guidelines that have emerged from these efforts, how these principles are operationalized and translated into broader policy is still the subject of current research. Specifically, there is a gap in our understanding of how policy practitioners actively engage with, contextualize, or reflect on existing AI ethics policies in their daily professional activities, and these experts' perspectives on AI regulation more generally are not fully understood. To this end, this paper explores the perceptions of scientists and engineers in policy-related roles in the US public and nonprofit sectors towards AI ethics policy, both in the US and abroad. We interviewed 15 policy experts and found that although they were generally familiar with AI governance efforts within their domains, their overall knowledge of guiding frameworks and critical regulatory policies was still limited. There was also a general perception among the experts we interviewed that the US lagged behind other comparable countries in regulating AI, a finding that supports the conclusions of existing literature. Lastly, we conducted a preliminary comparison between the AI ethics policies identified by the policy experts in our study and those emphasized in existing literature, identifying both commonalities and areas of divergence.
  2. Dominant approaches to the ethics of artificial intelligence (AI) systems have been based mainly on individualistic, rule-based ethical frameworks central to Western cultures. These approaches have encountered both philosophical and computational limitations: they often struggle to accommodate the remarkably diverse, unstable, and complex contexts of human-AI interactions. Recently there has been increasing interest among philosophers and computer scientists in building a relational approach to the ethics of AI. This article engages with Daniel A. Bell and Pei Wang's most recent book Just Hierarchy and explores how their theory of just hierarchy can be employed to develop a more systematic account of relational AI ethics. Bell and Wang's theory acknowledges that there are morally justified situations in which social relations are not equal; just hierarchy can exist both between humans and between humans and machines such as AI systems. A relational ethics for AI based on just hierarchy can therefore include two theses: (i) AI systems should be considered merely as tools, and their relations with humans are hierarchical (e.g., designing AI systems with lower moral standing than humans); and (ii) the moral assessment of AI systems should focus on whether they help us realize our role-based moral obligations prescribed by our social relations with others (these relations often involve diverse forms of morally justified hierarchies in communities). Finally, the article discusses the practical implications of such a relational ethics framework for designing socially integrated and ethically responsive AI systems.
  3. While offering the potential to support learning interactions, emerging AI applications like Large Language Models (LLMs) come with ethical concerns. Grounding technology design in human values can address AI ethics and ensure adoption. To this end, we apply Value-Sensitive Design (involving empirical, conceptual, and technical investigations) to centre human values in the development and evaluation of LLM-based chatbots within a high school environmental science curriculum. Representing multiple perspectives and expertise, the chatbots help students refine their causal models of climate change's impact on local marine ecosystems, communities, and individuals. We first perform an empirical investigation leveraging participatory design to explore the values that motivate students and educators to engage with the chatbots. Then, we conceptualize the values that emerge from the empirical investigation by grounding them in research on ethical AI design, human values, human-AI interactions, and environmental education. Findings illuminate considerations for the chatbots to support students' identity development, well-being, human-chatbot relationships, and environmental sustainability. We further map the values onto design principles and illustrate how these principles can guide the development and evaluation of the chatbots. Our research demonstrates how to conduct contextual, value-sensitive inquiries of emergent AI technologies in educational settings.
Practitioner notes
What is already known about this topic:
- Generative artificial intelligence (GenAI) technologies like Large Language Models (LLMs) can not only support learning but also raise ethical concerns such as transparency, trust, and accountability.
- Value-sensitive design (VSD) presents a systematic approach to centring human values in technology design.
What this paper adds:
- We apply VSD to design LLM-based chatbots in environmental education and identify values central to supporting students' learning.
- We map the values emerging from the VSD investigations to several stages of GenAI technology development: conceptualization, development, and evaluation.
Implications for practice and/or policy:
- Identity development, well-being, human-AI relationships, and environmental sustainability are key values for designing LLM-based chatbots in environmental education.
- Using educational stakeholders' values to generate design principles and evaluation metrics for learning technologies can promote technology adoption and engagement.
  4. The recent surge in artificial intelligence (AI) developments has been met with increased attention to incorporating ethical engagement in machine learning discourse and development. This attention is noticeable within engineering education, where comprehensive ethics curricula are typically absent from the programs that train future engineers to develop AI technologies [1]. Artificial intelligence technologies operate as black boxes, presenting both developers and users with a certain level of obscurity concerning their decision-making processes and a diminished potential for negotiating with their outputs [2]. The implementation of collaborative and reflective learning has the potential to engage students with the facets of ethical awareness that accompany algorithmic decision making, such as bias, security, transparency, and other ethical and moral dilemmas. However, few studies examine how students learn AI ethics in electrical and computer engineering courses. This paper explores the integration of STEMtelling, a pedagogical storytelling method/sensibility, into an undergraduate machine learning course. STEMtelling is a novel approach that invites participants (STEMtellers) to center their own interests and experiences through writing and sharing engineering stories (STEMtells) connected to course objectives. Employing a case study approach grounded in activity theory, we explore how students learn the ethical awareness that is intrinsic to being an engineer. During the STEMtelling process, STEMtellers blur the boundaries between social and technical knowledge to place themselves at the center of knowledge production. In this work in progress (WIP), we discuss algorithmic awareness as one of the themes identified as a practice in developing ethical awareness of AI through STEMtelling. Findings from this study will be incorporated into the development of STEMtelling and will address the challenges of integrating ethics and the social perception of AI into machine learning courses.
  5. Abstract: In this article, we present an educational intervention that embeds ethics education within research laboratories. This structure is designed to assist students in addressing ethical challenges in a more informed way and to improve the overall ethical culture of research environments. The project seeks (a) to identify factors that students and researchers consider relevant to ethical conduct in science, technology, engineering, and math (STEM) and (b) to promote the cultivation of an ethical culture in experimental laboratories by integrating research stakeholders in a bottom-up approach to developing context-specific, ethics-based guidelines. An important assumption behind this approach is that direct involvement in the process of developing laboratory-specific ethical guidelines will positively influence researchers' understanding of ethical research and practice issues, their handling of these issues, and the promotion of an ethical culture in the respective laboratory. This active involvement may also increase researchers' sense of ownership and encourage further discussion of these important topics. Based on the project experiences, the project team seeks to develop a module for the bottom-up building of codes-of-ethics-based guidelines that can be used by a broad range of institutions and that will be distributed widely.