Title: Blame-Laden Moral Rebukes and the Morally Competent Robot: A Confucian Ethical Perspective
Empirical studies suggest that language-capable robots have the persuasive power to shape shared moral norms through how they respond to human norm violations. This persuasive power presents cause for concern, but also an opportunity: robots could persuade humans to cultivate their own moral development. We argue that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans’ proposed violations of shared norms by using strategies such as blame-laden rebukes, even if doing so may violate other standing norms, such as politeness. Drawing on Confucian ethics, we argue that a robot’s ability to employ blame-laden moral rebukes in response to unethical human requests is crucial for cultivating a flourishing “moral ecology” of human–robot interaction. Such a positive moral ecology allows human teammates to develop their own moral reflection skills and grow their own virtues. Furthermore, this ability can and should be considered one criterion for assessing artificial moral agency. Finally, this paper discusses potential implications of Confucian theories for designing socially integrated and morally competent robots.
Award ID(s):
1909847
NSF-PAR ID:
10173019
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Science and Engineering Ethics
ISSN:
1353-3452
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Ess, Charles (Ed.)
    Dominant approaches to designing morally capable robots have been based mainly on rule-based ethical frameworks such as deontology and consequentialism. These approaches have encountered both philosophical and computational limitations, often struggling to accommodate the remarkably diverse, unstable, and complex contexts of human-robot interaction. Roboticists and philosophers have recently been exploring underrepresented ethical traditions, such as virtue-based, role-based, and relational ethical frameworks, for designing morally capable robots. This paper employs the lens of ethical pluralism to examine the notion of role-based morality in the global context and to discuss how such cross-cultural analysis of role ethics can inform the design of morally competent robots. In doing so, it first provides a concise introduction to ethical pluralism and how it has been employed as a method to interpret issues in computer and information ethics. Second, it reviews specific schools of thought in Western ethics that derive morality from role-based obligations. Third, it presents a more recent effort in Confucianism to reconceptualize Confucian ethics as a role-based ethic. The paper then compares the shared norms and irreducible differences between Western and Eastern approaches to role ethics. Finally, it discusses how such examination of pluralist views of role ethics across cultures can be conducive to the design of morally capable robots sensitive to diverse value systems in the global context.
  2. It is critical for designers of language-capable robots to enable some degree of moral competence in those robots. This is especially urgent given the current research climate, in which much natural language generation research focuses on language modeling techniques whose general approach may be categorized as “fabrication by imitation” (the titular mechanical “bull”), an approach especially unsuitable in robotic contexts. Furthermore, robot designers seeking to enable moral competence should consider previously under-explored moral frameworks that place greater emphasis than traditional Western frameworks on care, equality, and social justice, as the current sociopolitical climate has seen a rise in movements, such as libertarian capitalism, that have undermined those societal goals. In this paper we examine one alternate framework for the design of morally competent robots, Confucian ethics, and explore how designers may use this framework to enable morally sensitive human-robot communication through three distinct perspectives: (1) How should a robot reason? (2) What should a robot say? and (3) How should a robot act?
  3. Due to their unique persuasive power, language-capable robots must be able both to act in line with human moral norms and to communicate those norms clearly and appropriately. These requirements are complicated by the possibility that humans may ascribe blame differently to humans and robots. In this work, we explore how robots should communicate in moral advising scenarios, in which the norms they are expected to follow (in a moral dilemma scenario) may differ from those their advisees are expected to follow. Our results suggest that, in fact, both humans and robots are judged more positively when they provide advice that favors the common good over an individual’s life. These results raise critical new questions regarding people’s moral responses to robots and the design of autonomous moral agents.
  4. Explainability has emerged as a critical AI research objective, but the breadth of proposed methods and application domains suggests that criteria for explanation vary greatly. In particular, what counts as a good explanation, and what kinds of explanation are computationally feasible, has become trickier in light of opaque “black box” systems such as deep neural networks. Explanation in such cases has drifted from what many philosophers stipulated as having to involve deductive and causal principles to mere “interpretation,” which approximates what happened in the target system to varying degrees. However, such post hoc constructed rationalizations are highly problematic for social robots that operate interactively in spaces shared with humans, for in such social contexts, explanations of behavior, and in particular justifications for violations of expected behavior, should make reference to socially accepted principles and norms. In this article, we show how a social robot’s actions can face explanatory demands concerning how it came to act on its decision, what goals, tasks, or purposes its design had those actions pursue, and what norms or social constraints the system recognizes in the course of its action. As a result, we argue that explanations for social robots will need to be accurate representations of the system’s operation along causal, purposive, and justificatory lines. These explanations will need to generate appropriate references to principles and norms; explanations based on mere “interpretability” will ultimately fail to connect the robot’s behaviors to their appropriate determinants. We then lay out the foundations of a cognitive robotic architecture for HRI, together with particular component algorithms, for generating explanations and engaging in justificatory dialogues with human interactants. Such explanations track the robot’s actual decision-making and behavior, which are themselves determined by normative principles the robot can describe and use for justifications.
  5. Because robots are perceived as moral agents, they hold significant persuasive power over humans. It is thus crucial for robots to behave in accordance with human systems of morality and to use effective strategies for human-robot moral communication. In this work, we evaluate two moral communication strategies, a norm-based strategy grounded in deontological ethics and a role-based strategy grounded in role ethics, to test their effectiveness in encouraging compliance with norms grounded in role expectations. Our results suggest two major findings: (1) reflective exercises may increase the efficacy of role-based moral language, and (2) opportunities for moral practice following robots’ use of moral language may facilitate role-centered moral cultivation.