

Title: Robots as Moral Advisors: The Effects of Deontological, Virtue, and Confucian Role Ethics on Encouraging Honest Behavior
We examined how robots can successfully serve as moral advisors for humans. We evaluated the effectiveness of moral advice grounded in deontological, virtue, and Confucian role ethics frameworks in encouraging humans to make honest decisions. Participants were introduced to a tempting situation where extra monetary gain could be earned by choosing to cheat (i.e., violating the norm of honesty). Prior to their decision, a robot encouraged honest choices by offering a piece of moral advice grounded in one of the three ethics frameworks. While the robot’s advice was not effective at discouraging dishonest choices overall, there was preliminary evidence indicating the relative effectiveness of moral advice drawn from deontology. We also explored how different cultural orientations (i.e., vertical and horizontal collectivism and individualism) influence honest decisions across differently framed moral advice. We found that individuals with a strong cultural orientation toward establishing their own power and status through competition (i.e., high vertical individualism) were more likely to make dishonest choices, especially when moral advice was drawn from virtue ethics. Our findings suggest the importance of considering different ethical frameworks and cultural differences when designing robots that can guide humans to comply with the norm of honesty.
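The abstract does not report the analysis itself, but the moderation finding (advice framing interacting with vertical individualism in predicting dishonest choices) is the kind of effect commonly tested with a logistic regression that includes an interaction term. The sketch below is only illustrative and is not the authors' code; the data layout and column names (dishonest, advice, vi) are assumptions made for the example.

```python
# Minimal sketch (not the authors' analysis): testing whether a vertical-individualism
# score moderates the effect of moral-advice framing on dishonest choices.
# Assumed data: one row per participant with
#   dishonest : 0/1 indicator of choosing to cheat
#   advice    : advice framing ("deontology", "virtue", "confucian")
#   vi        : continuous vertical-individualism score
import pandas as pd
import statsmodels.formula.api as smf

def fit_moderation_model(df: pd.DataFrame):
    """Logistic regression with an advice x vertical-individualism interaction."""
    return smf.logit("dishonest ~ C(advice) * vi", data=df).fit(disp=False)

# Example usage with a hypothetical data file:
# df = pd.read_csv("choices.csv")
# print(fit_moderation_model(df).summary())
```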
Award ID(s):
1909847
NSF-PAR ID:
10265910
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM/IEEE International Conference on Human-Robot Interaction
Page Range / eLocation ID:
10 to 18
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We examined whether a robot that proactively offers moral advice promoting the norm of honesty can discourage people from cheating. Participants were presented with an opportunity to cheat in a die-rolling game. Prior to playing the game, participants received a piece of moral advice grounded in deontological, virtue, or Confucian role ethics from either a NAO robot or a human, or received no advice at all. We found that moral advice grounded in Confucian role ethics could reduce cheating when the advice was delivered by a human. No advice was effective when delivered by a robot. These findings highlight challenges in building robots that can possibly guide people to follow moral norms.
  2. Because robots are perceived as moral agents, they must behave in accordance with human systems of morality. This responsibility is especially acute for language-capable robots because moral communication is a method for building moral ecosystems. Language-capable robots must not only make sure that what they say adheres to moral norms; they must also actively engage in moral communication to regulate and encourage human compliance with those norms. In this work, we describe four experiments (total N = 316) across which we systematically evaluate two different moral communication strategies that robots could use to influence human behavior: a norm-based strategy grounded in deontological ethics, and a role-based strategy grounded in role ethics. Specifically, we assess the effectiveness of robots that use these two strategies to encourage human compliance with norms grounded in expectations of behavior associated with certain social roles. Our results suggest two major findings, demonstrating the importance of moral reflection and moral practice for effective moral communication: First, opportunities for reflection on ethical principles may increase the efficacy of robots’ role-based moral language; and second, following robots’ moral language with opportunities for moral practice may facilitate role-based moral cultivation.
  3. Because robots are perceived as moral agents, they hold significant persuasive power over humans. It is thus crucial for robots to behave in accordance with human systems of morality and to use effective strategies for human-robot moral communication. In this work, we evaluate two moral communication strategies: a norm-based strategy grounded in deontological ethics, and a role-based strategy grounded in role ethics, in order to test the effectiveness of these two strategies in encouraging compliance with norms grounded in role expectations. Our results suggest two major findings: (1) reflective exercises may increase the efficacy of role-based moral language and (2) opportunities for moral practice following robots’ use of moral language may facilitate role-centered moral cultivation.
  4. The field of machine ethics is in the process of designing and developing the computational underpinnings necessary for a robot to make ethical decisions in real-world environments. Yet a key issue faced by machine ethics researchers is the apparent lack of consensus as to the existence and nature of a correct moral theory. Our research seeks to grapple with, and perhaps sidestep, this age-old and ongoing philosophical problem by creating a robot architecture that does not strictly rely on one particular ethical theory. Rather, it would be informed by the insights gleaned from multiple ethical frameworks, perhaps including Kantianism, Utilitarianism, and Ross’s duty-based ethical theory, and by moral emotions. Arguably, moral emotions are an integral part of a human’s ethical decision-making process and thus need to be accounted for if robots are to make decisions that roughly approximate how humans navigate through ethically complex circumstances. The aim of this presentation is to discuss the philosophical aspects of our approach.
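The abstract above sketches an architecture informed by multiple ethical frameworks and moral emotions but, as a philosophical presentation, does not specify an implementation. The following is a purely hypothetical illustration of that idea as a weighted aggregation of framework verdicts; every name, signature, and weighting choice here is an assumption, not something drawn from the paper.

```python
# Hypothetical sketch: an ethical evaluator that combines verdicts from several
# moral frameworks and a moral-emotion signal instead of committing to one theory.
from dataclasses import dataclass
from typing import Callable, Dict

# Each framework maps a candidate action (a dict of features) to a permissibility
# score in [-1, 1], where -1 means forbidden and 1 means clearly permissible.
FrameworkFn = Callable[[Dict[str, float]], float]

@dataclass
class MultiFrameworkEvaluator:
    frameworks: Dict[str, FrameworkFn]  # e.g. {"kantian": ..., "utilitarian": ...}
    weights: Dict[str, float]           # relative trust placed in each framework
    emotion_bias: float = 0.0           # signal supplied by a moral-emotion module

    def evaluate(self, action: Dict[str, float]) -> float:
        """Weighted mean of framework verdicts, nudged by the moral-emotion signal."""
        total = sum(self.weights.values())
        score = sum(self.weights[name] * fn(action) for name, fn in self.frameworks.items())
        return score / total + self.emotion_bias

# Toy framework functions for demonstration only.
kantian = lambda a: -1.0 if a.get("deception", 0) > 0 else 1.0
utilitarian = lambda a: a.get("net_benefit", 0.0)

evaluator = MultiFrameworkEvaluator(
    frameworks={"kantian": kantian, "utilitarian": utilitarian},
    weights={"kantian": 1.0, "utilitarian": 1.0},
)
print(evaluator.evaluate({"deception": 1, "net_benefit": 0.4}))  # negative: judged impermissible
```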