Dominant approaches to the ethics of artificial intelligence (AI) systems have been based mainly on the individualistic, rule-based ethical frameworks central to Western cultures. These approaches have encountered both philosophical and computational limitations: they often struggle to accommodate the remarkably diverse, unstable, and complex contexts of human-AI interaction. Recently there has been increasing interest among philosophers and computer scientists in building a relational approach to the ethics of AI. This article engages with Daniel A. Bell and Pei Wang's most recent book Just Hierarchy and explores how their theory of just hierarchy can be employed to develop a more systematic account of relational AI ethics. Bell and Wang's theory of just hierarchy acknowledges that there are morally justified situations in which social relations are not equal. Just hierarchy can exist both among humans and between humans and machines such as AI systems. A relational ethic for AI based on just hierarchy can therefore include two theses: (i) AI systems should be considered merely as tools, and their relations with humans are hierarchical (e.g., designing AI systems with lower moral standing than humans); and (ii) the moral assessment of AI systems should focus on whether they help us realize our role-based moral obligations prescribed by our social relations with others (relations that often involve diverse forms of morally justified hierarchy in communities). Finally, this article discusses the practical implications of such a relational ethical framework for designing socially integrated and ethically responsive AI systems.
Role-based Morality, Ethical Pluralism, and Morally Capable Robots
Dominant approaches to designing morally capable robots have been based mainly on rule-based ethical frameworks such as deontology and consequentialism. These approaches have encountered both philosophical and computational limitations: they often struggle to accommodate the remarkably diverse, unstable, and complex contexts of human-robot interaction. Roboticists and philosophers have recently begun exploring underrepresented ethical traditions, such as virtue-based, role-based, and relational ethical frameworks, for designing morally capable robots. This paper employs the lens of ethical pluralism to examine the notion of role-based morality in the global context and to discuss how such cross-cultural analysis of role ethics can inform the design of morally competent robots. In doing so, it first provides a concise introduction to ethical pluralism and how it has been employed as a method for interpreting issues in computer and information ethics. Second, it reviews specific schools of thought in Western ethics that derive morality from role-based obligations. Third, it presents a more recent effort in Confucianism to reconceptualize Confucian ethics as a role-based ethic. The paper then compares the shared norms and irreducible differences between Western and Eastern approaches to role ethics. Finally, it discusses how such an examination of pluralist views of role ethics across cultures can be conducive to the design of morally capable robots sensitive to diverse value systems in the global context.
- Award ID(s):
- 1909847
- PAR ID:
- 10265897
- Editor(s):
- Ess, Charles
- Date Published:
- Journal Name:
- Journal of contemporary Eastern Asia
- ISSN:
- 2383-9449
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
It is critical for designers of language-capable robots to enable some degree of moral competence in those robots. This is especially urgent at this point in history given the current research climate, in which much natural language generation research focuses on language modeling techniques whose general approach may be categorized as “fabrication by imitation” (the titular mechanical “bull”), an approach that is especially unsuitable in robotic contexts. Furthermore, it is critical for robot designers seeking to enable moral competence to consider previously under-explored moral frameworks that place greater emphasis than traditional Western frameworks on care, equality, and social justice, as the current sociopolitical climate has seen a rise of movements, such as libertarian capitalism, that have undermined those societal goals. In this paper we examine one alternative framework for the design of morally competent robots, Confucian ethics, and explore how designers may use this framework to enable morally sensitive human-robot communication through three distinct perspectives: (1) How should a robot reason? (2) What should a robot say? and (3) How should a robot act?
This paper describes current progress on developing an ethical architecture for robots that are designed to follow human ethical decision-making processes. We surveyed both regular adults (folks) and ethics experts (experts) on what they consider to be ethical behavior in two specific scenarios: pill-sorting with an older adult and game playing with a child. A key goal of the surveys is to better understand human ethical decision-making. In the first survey, folk responses were based on the subject’s ethical choices (“folk morality”); in the second survey, expert responses were based on the expert’s application of different formal ethical frameworks to each scenario. We observed that most of the formal ethical frameworks we included in the survey (Utilitarianism, Kantian Ethics, Ethics of Care, and Virtue Ethics) and “folk morality” were conservative toward deception in the high-risk task with an older adult when both the adult and the child had significant performance deficiencies.
Because robots are perceived as moral agents, they must behave in accordance with human systems of morality. This responsibility is especially acute for language-capable robots because moral communication is a method for building moral ecosystems. Language-capable robots must not only ensure that what they say adheres to moral norms; they must also actively engage in moral communication to regulate and encourage human compliance with those norms. In this work, we describe four experiments (total N = 316) across which we systematically evaluate two different moral communication strategies that robots could use to influence human behavior: a norm-based strategy grounded in deontological ethics, and a role-based strategy grounded in role ethics. Specifically, we assess the effectiveness of robots that use these two strategies to encourage human compliance with norms grounded in expectations of behavior associated with certain social roles. Our results suggest two major findings, demonstrating the importance of moral reflection and moral practice for effective moral communication: first, opportunities for reflection on ethical principles may increase the efficacy of robots’ role-based moral language; and second, following robots’ moral language with opportunities for moral practice may facilitate role-based moral cultivation.
Our research team has been investigating methods for enabling robots to behave ethically while interacting with human beings. Our approach relies on two main sources of data for determining what counts as “ethical” behavior. The first is the views of average adults, which we refer to as “folk morality”; the second is the views of ethics experts. Yet the enterprise of identifying what should ground a robot’s decisions about ethical matters raises many fundamental metaethical questions. Here, we focus on one main metaethical question: would reason dictate that it is more justifiable to base a robot’s decisions on folk morality or on the guidance of ethics experts? The goal of this presentation is to highlight some of the arguments for and against each respective point of view, and the implications such arguments might have for the endeavor to encode ethical decision-making processes into robots.