Title: Ethical Considerations in Artificial Intelligence Courses
The recent surge of interest in ethics in artificial intelligence may leave many educators wondering how to address moral, ethical, and philosophical issues in their AI courses. As instructors, we want to develop curricula that prepare students not only to be artificial intelligence practitioners but also to understand the moral, ethical, and philosophical impacts that artificial intelligence will have on society. In this article we provide practical case studies and links to resources for use by AI educators, along with concrete suggestions on how to integrate AI ethics into a general artificial intelligence course and how to teach a stand-alone artificial intelligence ethics course.
Award ID(s):
1646887
PAR ID:
10033556
Author(s) / Creator(s):
Date Published:
Journal Name:
AI Magazine
Volume:
38
Issue:
2
ISSN:
0738-4602
Page Range / eLocation ID:
23 - 34
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  2. Researchers who use artificial intelligence (AI) and machine learning tools face pressure to pursue “ethical AI,” yet little is known about how researchers enact ethical standards in practice. The author investigates the development of AI ethics using the case of digital psychiatry, a field that uses machine learning to study mental illness and provide mental health care. Drawing on ethnographic research and interviews, the author analyzes how digital psychiatry researchers become “moral entrepreneurs,” actors who wield their social influence to define ethical conduct, through two practices. First, researchers engage in moral discovery, identifying gaps in regulation as opportunities to articulate ethical standards. Second, researchers engage in moral enclosure, specifying a community of people licensed to do moral regulation. With these techniques, digital psychiatry researchers demonstrate ethical innovation is essential to their professional identity. Yet ultimately, the author demonstrates how moral entrepreneurship erects barriers to participation in ethical decision making and constrains the focus of ethical consideration.

     
  3. Dominant approaches to the ethics of artificial intelligence (AI) systems have been based mainly on the individualistic, rule-based ethical frameworks central to Western cultures. These approaches have encountered both philosophical and computational limitations: they often struggle to accommodate the remarkably diverse, unstable, and complex contexts of human-AI interactions. Recently there has been increasing interest among philosophers and computer scientists in building a relational approach to the ethics of AI. This article engages with Daniel A. Bell and Pei Wang's most recent book Just Hierarchy and explores how their theory of just hierarchy can be employed to develop a more systematic account of relational AI ethics. Bell and Wang's theory of just hierarchy acknowledges that there are morally justified situations in which social relations are not equal; just hierarchies can exist both among humans and between humans and machines such as AI systems. A relational ethic for AI based on just hierarchy can therefore include two theses: (i) AI systems should be considered merely as tools, and their relations with humans are hierarchical (e.g., designing AI systems with lower moral standing than humans); and (ii) the moral assessment of AI systems should focus on whether they help us realize our role-based moral obligations prescribed by our social relations with others (relations that often involve diverse forms of morally justified hierarchies in communities). Finally, this article discusses the practical implications of such a relational ethical framework for designing socially integrated and ethically responsive AI systems.
  4. Ethical considerations are the fabric of society; they foster cooperation, help, and sacrifice for the greater good. Advances in AI create a greater need to examine the ethical considerations involved in the development and implementation of such systems. Integrating ethics into artificial intelligence-based programs is crucial for preventing negative outcomes such as privacy breaches and biased decision making. Human–AI teaming (HAIT) presents additional challenges, as the ethical principles and moral theories that provide justification for them are not yet computable by machines. To that end, models of human judgment and decision making, such as the agent-deed-consequence (ADC) model, will be crucial for informing the ethical guidance functions in AI teammates and for clarifying how and why humans (dis)trust machines. The current paper examines the ADC model as applied to the context of HAIT, and the challenges associated with using human-centric ethical considerations in an AI context.
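The ADC model is described above only informally; the abstract itself notes that such models are not yet computable. As a purely illustrative toy sketch, a three-factor judgment along agent, deed, and consequence lines could be made numeric as follows. The scales, weights, and linear combination rule are all assumptions for the sketch, not anything proposed in the paper:

```python
from dataclasses import dataclass

# Toy sketch of an agent-deed-consequence (ADC) style judgment.
# All scales and the weighted-average rule are invented for illustration.

@dataclass
class MoralScenario:
    agent_intent: float   # -1 (malicious) .. +1 (virtuous); assumed scale
    deed: float           # -1 (forbidden) .. +1 (obligatory); assumed scale
    consequence: float    # -1 (harmful) .. +1 (beneficial); assumed scale

def adc_judgment(s: MoralScenario,
                 w_agent: float = 1.0,
                 w_deed: float = 1.0,
                 w_cons: float = 1.0) -> float:
    """Blend the three ADC components into a single judgment in [-1, 1]."""
    total = w_agent + w_deed + w_cons
    return (w_agent * s.agent_intent
            + w_deed * s.deed
            + w_cons * s.consequence) / total

# A well-intentioned, permissible act with a mildly bad outcome still
# comes out mildly positive under equal weights:
print(adc_judgment(MoralScenario(agent_intent=1.0, deed=0.5, consequence=-0.5)))
```

One appeal of this shape is that the weights make the trade-off between intention, act type, and outcome explicit and tunable, which is one way a judgment model could inform an AI teammate's ethical guidance function.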
  5. The field of machine ethics is in the process of designing and developing the computational underpinnings necessary for a robot to make ethical decisions in real-world environments. Yet a key issue faced by machine ethics researchers is the apparent lack of consensus as to the existence and nature of a correct moral theory. Our research seeks to grapple with, and perhaps sidestep, this age-old and ongoing philosophical problem by creating a robot architecture that does not strictly rely on one particular ethical theory. Rather, it would be informed by the insights gleaned from multiple ethical frameworks, perhaps including Kantianism, utilitarianism, and Ross's duty-based ethical theory, as well as by moral emotions. Arguably, moral emotions are an integral part of a human's ethical decision-making process and thus need to be accounted for if robots are to make decisions that roughly approximate how humans navigate ethically complex circumstances. The aim of this presentation is to discuss the philosophical aspects of our approach.
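The architecture sketched in that abstract consults several ethical frameworks rather than committing to one. A minimal sketch of that idea, assuming each theory is modeled as a separate evaluator whose verdicts are aggregated (the evaluator rules, the keyword-matching stand-ins, and the "no strong objection" aggregation are all invented for illustration, not taken from the paper):

```python
from typing import Callable, Dict

# Hypothetical sketch: each ethical framework is an independent evaluator
# returning a verdict in [-1, +1] for a candidate action description, and
# deliberation aggregates the verdicts instead of trusting one theory.

Action = str
Evaluator = Callable[[Action], float]  # +1 permissible .. -1 impermissible

def utilitarian(action: Action) -> float:
    # Stand-in rule: approve actions tagged as net-beneficial.
    return 1.0 if "help" in action else -1.0

def kantian(action: Action) -> float:
    # Stand-in rule: veto actions tagged as deceptive.
    return -1.0 if "deceive" in action else 1.0

def moral_emotion(action: Action) -> float:
    # Stand-in for an affective signal, e.g. empathic aversion to harm.
    return -1.0 if "harm" in action else 0.0

FRAMEWORKS: Dict[str, Evaluator] = {
    "utilitarian": utilitarian,
    "kantian": kantian,
    "emotion": moral_emotion,
}

def deliberate(action: Action) -> bool:
    """Permit the action only if no framework strongly objects."""
    verdicts = {name: f(action) for name, f in FRAMEWORKS.items()}
    return min(verdicts.values()) > -1.0

print(deliberate("help patient"))     # no framework objects
print(deliberate("deceive to help"))  # Kantian-style veto blocks it
```

The unanimity-style aggregation here is just one possible choice; weighted voting or lexical priority among frameworks would fit the same evaluator interface.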