Title: Toward Ethical Robotic Behavior in Human-Robot Interaction Scenarios
This paper describes current progress on developing an ethical architecture for robots that are designed to follow human ethical decision-making processes. We surveyed both regular adults (folks) and ethics experts (experts) on what they consider to be ethical behavior in two specific scenarios: pill-sorting with an older adult and game playing with a child. A key goal of the surveys is to better understand human ethical decision-making. In the first survey, folk responses were based on the subject's ethical choices ("folk morality"); in the second survey, expert responses were based on the expert's application of different formal ethical frameworks to each scenario. We observed that most of the formal ethical frameworks included in the survey (Utilitarianism, Kantian Ethics, Ethics of Care, and Virtue Ethics), as well as "folk morality," were conservative toward deception in the high-risk task with an older adult when both the adult and the child had significant performance deficiencies.
Journal Name:
The Road to a successful HRI: AI, Trust and ethicS - TRAITS Workshop @HRI 2022
Sponsoring Org:
National Science Foundation
More Like this
  1. Ethical decision-making is difficult, certainly for robots, let alone humans. If a robot's ethical decision-making process is going to be designed based on some approximation of how humans operate, then the assumption is that a good model of how humans make an ethical choice is readily available. Yet no single ethical framework seems sufficient to capture the diversity of human ethical decision-making. Our work seeks to develop the computational underpinnings that will allow a robot to use multiple ethical frameworks that guide it towards doing the right thing. As a step towards this goal, we have collected data investigating how regular adults and ethics experts approach ethical decisions in a healthcare scenario and a game-playing scenario. The decisions made by the former group are intended to represent an approximation of a folk morality approach to these dilemmas. The experts, on the other hand, were asked to judge what decision would result if a person were using one of several different types of ethical frameworks. The resulting data may reveal which features of the pill-sorting and game-playing scenarios contribute to similarities and differences between expert and non-expert responses. This type of approach to programming a robot may one day be able to rely on specific features of an interaction to determine which ethical framework to use in the robot's decision making.
  2. Our research team has been investigating methods for enabling robots to behave ethically while interacting with human beings. Our approach relies on two main sources of data for determining what counts as "ethical" behavior. The first is the views of average adults, which we refer to as "folk morality," and the second is the views of ethics experts. Yet the enterprise of identifying what should ground a robot's decisions about ethical matters raises many fundamental metaethical questions. Here, we focus on one main metaethical question: would reason dictate that it is more justifiable to base a robot's decisions on folk morality or on the guidance of ethics experts? The goal of this presentation is to highlight some of the arguments for and against each respective point of view, and the implications such arguments might have for the endeavor to encode ethical decision-making processes into robots.
  3. This Innovative Practice Full Paper presents a novel, narrative, game-based approach to introducing first-year engineering students to concepts in ethical decision making. Approximately 250 first-year engineering students at the University of Connecticut played through our adventure, titled Mars: An Ethical Expedition, by voting weekly as a class on a presented dilemma. Literature shows that case studies still dominate learning sciences research on engineering ethics education, and that novel, active-learning-based techniques, such as games, are infrequently used but can have a positive impact on both student engagement and learning. In this work, we suggest that games are a form of situated (context-based) learning, where the game setting provides learners with an authentic but safe space in which to explore engineering ethical choices and their consequences. As games normalize learning through failure, they present a unique opportunity for students to explore ethical decision making in a non-judgmental, playful, and safe way. We explored the situated nature of ethical decision making through a qualitative deconstruction of the weekly scenarios that students engaged with over the course of the twelve-week narrative. To assess their ethical reasoning, students took the Engineering Ethics Reasoning Instrument (EERI), a quantitative engineering ethics reasoning survey, at the beginning and end of the semester. The EERI scenarios were deconstructed to reveal their core ethical dilemmas, and then common elements between the EERI and our Mars adventure were compared to determine how students responded to similar ethical dilemmas presented in each context. We noted that students' responses to the ethical decisions in the Mars adventure scenarios were sometimes substantially different both from their responses to the EERI scenarios and from other decisions they made within the context of the game, despite the core ethical dilemma being the same.
This suggests that they make ethical decisions in some situations that differ from a presumed abstract understanding of post-conventional moral reasoning. This has implications for how ethical reasoning can be taught and scaffolded in educational settings. 
  4. Ess, Charles (Ed.)
    Dominant approaches to designing morally capable robots have been mainly based on rule-based ethical frameworks such as deontology and consequentialism. These approaches have encountered both philosophical and computational limitations, and they often struggle to accommodate the remarkably diverse, unstable, and complex contexts of human-robot interaction. Roboticists and philosophers have recently been exploring underrepresented ethical traditions such as virtue-based, role-based, and relational ethical frameworks for designing morally capable robots. This paper employs the lens of ethical pluralism to examine the notion of role-based morality in the global context and discusses how such cross-cultural analysis of role ethics can inform the design of morally competent robots. In doing so, it first provides a concise introduction to ethical pluralism and how it has been employed as a method to interpret issues in computer and information ethics. Second, it reviews specific schools of thought in Western ethics that derive morality from role-based obligations. Third, it presents a more recent effort in Confucianism to reconceptualize Confucian ethics as a role-based ethic. The paper then compares the shared norms and irreducible differences between Western and Eastern approaches to role ethics. Finally, it discusses how such examination of pluralist views of role ethics across cultures can be conducive to the design of morally capable robots sensitive to diverse value systems in the global context.
  5.
    A quiet revolution is afoot in the field of law. Technical systems employing algorithms are shaping and displacing professional decision making, and they are disrupting and restructuring relationships between law firms, lawyers, and clients. Decision-support systems marketed to legal professionals to support e-discovery—generally referred to as “technology assisted review” (TAR)—increasingly rely on “predictive coding”: machine-learning techniques to classify and predict which of the voluminous electronic documents subject to litigation should be withheld or produced to the opposing side. These systems and the companies offering them are reshaping relationships between lawyers and clients, introducing new kinds of professionals into legal practice, altering the discovery process, and shaping how lawyers construct knowledge about their cases and professional obligations. In the midst of these shifting relationships—and the ways in which these systems are shaping the construction and presentation of knowledge—lawyers are grappling with their professional obligations, ethical duties, and what it means for the future of legal practice. Through in-depth, semi-structured interviews of experts in the e-discovery technology space—the technology company representatives who develop and sell such systems to law firms and the legal professionals who decide whether and how to use them in practice—we shed light on the organizational structures, professional rules and norms, and technical system properties that are shaping and being reshaped by predictive coding systems. Our findings show that AI-supported decision systems such as these are reconfiguring professional work practices. 
In particular, they highlight concerns about the potential loss of professional agency and skill; limited understanding of, and thereby both over- and under-reliance on, decision-support systems; and confusion about responsibility and accountability as new kinds of technical professionals and technologies are brought into legal practice. The introduction of predictive coding systems, and the new professional and organizational arrangements they are ushering into legal practice, compounds general concerns over the opacity of technical systems with specific concerns about encroachments on the construction of expert knowledge, liability frameworks, and the potential (mis)alignment of machine reasoning with professional logic and ethics. Based on our findings, we conclude that predictive coding tools, and likely other algorithmic systems lawyers use to construct knowledge and reason about legal practice, challenge the current model for evaluating whether and how tools are appropriate for legal practice. As tools become both more complex and more consequential, it is unreasonable to rely solely on legal professionals (judges, law firms, and lawyers) to determine which technologies are appropriate for use. The legal professionals we interviewed report relying on the evaluation and judgment of a range of new technical experts within law firms and, increasingly, third-party vendors and their technical experts. This system for choosing the technical systems upon which lawyers rely to make professional decisions, e.g., whether documents are responsive or whether the standard of proportionality has been met, is no longer sufficient. Just as the tools of medicine are reviewed by appropriate experts before they are put out for consideration and adoption by medical professionals, we argue that the legal profession must develop new processes for determining which algorithmic tools are fit to support lawyers' decision making.
Relatedly, because predictive coding systems are used to produce lawyers' professional judgments, we argue they must be designed for contestability, providing greater transparency, interaction, and configurability around embedded choices to ensure that decisions about how to embed core professional judgments, such as relevance and proportionality, remain salient and demand engagement from lawyers, not just their technical experts.