The recent surge of interest in the ethics of artificial intelligence may leave many educators wondering how to address moral, ethical, and philosophical issues in their AI courses. As instructors, we want to develop curricula that prepare students not only to be artificial intelligence practitioners, but also to understand the moral, ethical, and philosophical impacts that artificial intelligence will have on society. In this article we provide practical case studies and links to resources for use by AI educators. We also provide concrete suggestions on how to integrate AI ethics into a general artificial intelligence course and how to teach a stand-alone artificial intelligence ethics course.
Ethical Considerations in Artificial Intelligence Courses
- Award ID(s):
- 1646887
- PAR ID:
- 10033556
- Date Published:
- Journal Name:
- AI magazine
- Volume:
- 38
- Issue:
- 2
- ISSN:
- 1337-7612
- Page Range / eLocation ID:
- 23 - 34
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Dominant approaches to the ethics of artificial intelligence (AI) systems have been based mainly on the individualistic, rule-based ethical frameworks central to Western cultures. These approaches have encountered both philosophical and computational limitations, and they often struggle to accommodate the remarkably diverse, unstable, and complex contexts of human-AI interactions. Recently there has been increasing interest among philosophers and computer scientists in building a relational approach to the ethics of AI. This article engages with Daniel A. Bell and Pei Wang’s most recent book Just Hierarchy and explores how their theory of just hierarchy can be employed to develop a more systematic account of relational AI ethics. Bell and Wang’s theory of just hierarchy acknowledges that there are morally justified situations in which social relations are not equal. Just hierarchy can exist both between humans and between humans and machines such as AI systems. Therefore, a relational ethic for AI based on just hierarchy can include two theses: (i) AI systems should be considered merely as tools, and their relations with humans are hierarchical (e.g., designing AI systems with lower moral standing than humans); and (ii) the moral assessment of AI systems should focus on whether they help us realize our role-based moral obligations prescribed by our social relations with others (these relations often involve diverse forms of morally justified hierarchies in communities). Finally, this article will discuss the practical implications of such a relational ethical framework for designing socially integrated and ethically responsive AI systems.
-
The recent surge in artificial intelligence (AI) developments has been met with increased attention toward incorporating ethical engagement in machine learning discourse and development. This attention is noticeable within engineering education, where comprehensive ethics curricula are typically absent from the engineering programs that train future engineers to develop AI technologies [1]. Artificial intelligence technologies operate as black boxes, presenting both developers and users with a certain level of obscurity concerning their decision-making processes and a diminished potential for negotiating with their outputs [2]. The implementation of collaborative and reflective learning has the potential to engage students with the facets of ethical awareness that accompany algorithmic decision making, such as bias, security, transparency, and other ethical and moral dilemmas. However, there are few studies that examine how students learn AI ethics in electrical and computer engineering courses. This paper explores the integration of STEMtelling, a pedagogical storytelling method/sensibility, into an undergraduate machine learning course. STEMtelling is a novel approach that invites participants (STEMtellers) to center their own interests and experiences through writing and sharing engineering stories (STEMtells) that are connected to course objectives. Employing a case study approach grounded in activity theory, we explore how students learn the ethical awareness that is intrinsic to being an engineer. During the STEMtelling process, STEMtellers blur the boundaries between social and technical knowledge to place themselves at the center of knowledge production. In this WIP, we discuss algorithmic awareness as one of the themes identified as a practice in developing ethical awareness of AI through STEMtelling.
Findings from this study will be incorporated into the development of STEMtelling and will address the challenges of integrating ethics and the social perception of AI into machine learning courses.
-
Ethical considerations are the fabric of society, and they foster cooperation, help, and sacrifice for the greater good. Advances in AI create a greater need to examine the ethical considerations involved in the development and implementation of such systems. Integrating ethics into artificial intelligence-based programs is crucial for preventing negative outcomes, such as privacy breaches and biased decision making. Human–AI teaming (HAIT) presents additional challenges, as the ethical principles and moral theories that provide justification for them are not yet computable by machines. To that effect, models of human judgment and decision making, such as the agent-deed-consequence (ADC) model, will be crucial to inform the ethical guidance functions in AI teammates and to clarify how and why humans (dis)trust machines. The current paper will examine the ADC model as it is applied to the context of HAIT, and the challenges associated with the use of human-centric ethical considerations when applied to an AI context.
-
The field of machine ethics is in the process of designing and developing the computational underpinnings necessary for a robot to make ethical decisions in real-world environments. Yet a key issue faced by machine ethics researchers is the apparent lack of consensus as to the existence and nature of a correct moral theory. Our research seeks to grapple with, and perhaps sidestep, this age-old and ongoing philosophical problem by creating a robot architecture that does not strictly rely on one particular ethical theory. Rather, it would be informed by the insights gleaned from multiple ethical frameworks, perhaps including Kantianism, Utilitarianism, and Ross’s duty-based ethical theory, and by moral emotions. Arguably, moral emotions are an integral part of a human’s ethical decision-making process and thus need to be accounted for if robots are to make decisions that roughly approximate how humans navigate ethically complex circumstances. The aim of this presentation is to discuss the philosophical aspects of our approach.