

Title: The Confucian Matador: Three Defenses Against the Mechanical Bull
It is critical for designers of language-capable robots to enable some degree of moral competence in those robots. This is especially urgent in the current research climate, in which much natural language generation research focuses on language modeling techniques whose general approach may be categorized as "fabrication by imitation" (the titular mechanical "bull"), an approach that is especially unsuitable in robotic contexts. Furthermore, designers seeking to enable moral competence should consider previously under-explored moral frameworks that place greater emphasis than traditional Western frameworks on care, equality, and social justice, as the current sociopolitical climate has seen the rise of movements, such as libertarian capitalism, that have undermined those societal goals. In this paper we examine one such alternative framework for the design of morally competent robots, Confucian ethics, and explore how designers may use this framework to enable morally sensitive human-robot communication from three distinct perspectives: (1) How should a robot reason? (2) What should a robot say? and (3) How should a robot act?
Award ID(s): 1909847
NSF-PAR ID: 10173018
Journal Name: ACM/IEEE International Conference on Human-Robot Interaction
Page Range / eLocation ID: 25 to 33
Sponsoring Org: National Science Foundation
More Like this
  1. To enable robots to exert positive moral influence, we need to understand the impacts of robots' moral communications, the ways robots can phrase their moral language to be most clear and persuasive, and the ways that these factors interact. Previous work has suggested, for example, that for certain types of robot moral interventions to be successful (i.e., moral interventions grounded in particular ethical frameworks), those interventions may need to be followed by opportunities for moral reflection, during which humans can critically engage not only with the contents of the robot's moral language, but also with the way that moral language connects with their social-relational ontology and broader moral ecosystem. We conceptually replicate this prior work (N = 119) using a design that more precisely manipulates moral reflection. Our results confirm that opportunities for moral reflection are indeed critical to the success of robotic moral interventions, regardless of the ethical framework in which those interventions are grounded.
  2. Empirical studies have suggested that language-capable robots have the persuasive power to shape shared moral norms based on how they respond to human norm violations. This persuasive power presents cause for concern, but also the opportunity to persuade humans to cultivate their own moral development. We argue that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans' proposed violations of shared norms by using strategies such as blame-laden rebukes, even if doing so may violate other standing norms, such as politeness. Drawing on Confucian ethics, we argue that a robot's ability to employ blame-laden moral rebukes in response to unethical human requests is crucial for cultivating a flourishing "moral ecology" of human–robot interaction. Such a positive moral ecology allows human teammates to develop their own moral reflection skills and grow their own virtues. Furthermore, this ability can and should be considered one criterion for assessing artificial moral agency. Finally, this paper discusses potential implications of Confucian theories for the design of socially integrated and morally competent robots.
  3. Due to their unique persuasive power, language-capable robots must be able both to act in line with human moral norms and to clearly and appropriately communicate those norms. These requirements are complicated by the possibility that humans may ascribe blame differently to humans and robots. In this work, we explore how robots should communicate in moral advising scenarios, in which the norms they are expected to follow (in a moral dilemma scenario) may differ from those their advisees are expected to follow. Our results suggest that, in fact, both humans and robots are judged more positively when they provide advice that favors the common good over an individual's life. These results raise critical new questions regarding people's moral responses to robots and the design of autonomous moral agents.
  4. In this work, we present Robots for Social Justice (R4SJ): a framework for an equitable engineering practice of Human-Robot Interaction, grounded in the Engineering for Social Justice (E4SJ) framework for Engineering Education and intended to complement existing frameworks for guiding equitable HRI research. To understand the new insights this framework could provide to the field of HRI, we analyze the past decade of papers published at the ACM/IEEE International Conference on Human-Robot Interaction and examine how well current HRI research aligns with the principles espoused in the E4SJ framework. Based on the gaps identified through this analysis, we make five concrete recommendations and highlight key questions that can guide introspection for engineers, designers, and researchers. We believe these considerations are a necessary step not only to ensure that our engineering education efforts encourage students to engage in equitable and societally beneficial engineering practices (the purpose of E4SJ), but also to ensure that the technical advances we present at conferences like HRI promise true advances to society, and not just to fellow researchers and engineers.