
Title: Informing a Robot Ethics Architecture through Folk and Expert Morality
Ethical decision-making is difficult for humans, let alone robots. If a robot's ethical decision-making process is to be designed on some approximation of how humans operate, then the design assumes that a good model of how humans make ethical choices is readily available. Yet no single ethical framework seems sufficient to capture the diversity of human ethical decision making. Our work seeks to develop the computational underpinnings that will allow a robot to use multiple ethical frameworks to guide it toward doing the right thing. As a step toward this goal, we have collected data investigating how regular adults and ethics experts approach ethical decisions related to robot use in a healthcare scenario and a game-playing scenario. The decisions made by the former group are intended to represent an approximation of a folk morality approach to these dilemmas. The experts, by contrast, were asked to judge what decision would result if a person were using one of several different ethical frameworks. The resulting data may reveal which features of the pill-sorting and game-playing scenarios contribute to similarities and differences between expert and non-expert responses. A robot programmed with this type of approach may one day be able to rely on specific features of an interaction to determine which ethical framework to use in its decision making.
Award ID(s):
1848974
NSF-PAR ID:
10394584
Author(s) / Creator(s):
Date Published:
Journal Name:
7th International Conference on Robot Ethics and Standards
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper describes current progress on developing an ethical architecture for robots that are designed to follow human ethical decision-making processes. We surveyed both regular adults (folks) and ethics experts (experts) on what they consider to be ethical behavior in two specific scenarios: pill-sorting with an older adult and game playing with a child. A key goal of the surveys is to better understand human ethical decision-making. In the first survey, folk responses were based on the subject’s ethical choices (“folk morality”); in the second survey, expert responses were based on the expert’s application of different formal ethical frameworks to each scenario. We observed that most of the formal ethical frameworks we included in the survey (Utilitarianism, Kantian Ethics, Ethics of Care and Virtue Ethics) and “folk morality” were conservative toward deception in the high-risk task with an older adult when both the adult and the child had significant performance deficiencies. 
  2.
    A quiet revolution is afoot in the field of law. Technical systems employing algorithms are shaping and displacing professional decision making, and they are disrupting and restructuring relationships between law firms, lawyers, and clients. Decision-support systems marketed to legal professionals to support e-discovery—generally referred to as “technology assisted review” (TAR)—increasingly rely on “predictive coding”: machine-learning techniques to classify and predict which of the voluminous electronic documents subject to litigation should be withheld or produced to the opposing side. These systems and the companies offering them are reshaping relationships between lawyers and clients, introducing new kinds of professionals into legal practice, altering the discovery process, and shaping how lawyers construct knowledge about their cases and professional obligations. In the midst of these shifting relationships—and the ways in which these systems are shaping the construction and presentation of knowledge—lawyers are grappling with their professional obligations, ethical duties, and what it means for the future of legal practice. Through in-depth, semi-structured interviews of experts in the e-discovery technology space—the technology company representatives who develop and sell such systems to law firms and the legal professionals who decide whether and how to use them in practice—we shed light on the organizational structures, professional rules and norms, and technical system properties that are shaping and being reshaped by predictive coding systems. Our findings show that AI-supported decision systems such as these are reconfiguring professional work practices. 
In particular, they highlight concerns about potential loss of professional agency and skill, limited understanding and thereby both over- and under-reliance on decision-support systems, and confusion about responsibility and accountability as new kinds of technical professionals and technologies are brought into legal practice. The introduction of predictive coding systems and the new professional and organizational arrangements they are ushering into legal practice compound general concerns over the opacity of technical systems with specific concerns about encroachments on the construction of expert knowledge, liability frameworks, and the potential (mis)alignment of machine reasoning with professional logic and ethics. Based on our findings, we conclude that predictive coding tools—and likely other algorithmic systems lawyers use to construct knowledge and reason about legal practice—challenge the current model for evaluating whether and how tools are appropriate for legal practice. As tools become both more complex and more consequential, it is unreasonable to rely solely on legal professionals—judges, law firms, and lawyers—to determine which technologies are appropriate for use. The legal professionals we interviewed report relying on the evaluation and judgment of a range of new technical experts within law firms and, increasingly, on third-party vendors and their technical experts. This system for choosing the technical systems upon which lawyers rely to make professional decisions—e.g., whether documents are responsive, or whether the standard of proportionality has been met—is no longer sufficient. Just as the tools of medicine are reviewed by appropriate experts before they are put out for consideration and adoption by medical professionals, we argue that the legal profession must develop new processes for determining which algorithmic tools are fit to support lawyers' decision making.
Relatedly, because predictive coding systems are used to produce lawyers' professional judgment, we argue they must be designed for contestability—providing greater transparency, interaction, and configurability around embedded choices to ensure that decisions about how to embed core professional judgments, such as relevance and proportionality, remain salient and demand engagement from lawyers, not just their technical experts.
  3. We contend a better way to teach ethics to freshman engineering students is to address engineering ethics not solely in the abstract terms of philosophy or moral development, but as situated in the everyday decisions of engineers. Since everyday decisions are not typically a part of university courses, our approach in large lecture classes is to simulate engineering decision-making situations using the role-playing mechanics and narrative structure of a fictional choose-your-own-adventure. Drawing on the contemporary learning theory of situated learning [1], [2], such playful learning may enable instructors to create assignments that induce students to break free of the typical student mindset of finding the “right” answer. Mars: An Ethical Expedition! is an interactive, 12-week, narrative game about the colonization of Mars by various engineering specialists. Students take on the role of a head engineer and are presented with situations that require high-stakes decision-making. Various game mechanics induce students to act as they would on the fly, within a real engineering project context, using personal reasoning and richly context-dependent justifications, rather than simply right/wrong answers. Each segment of the game is presented in audio and text and ends with a binary decision that determines what will happen next in the story. Historically, this game has been led by an instructor and played weekly, as a whole-class assignment, completed at the beginning of class. The class votes, and the majority option is presented next. In addition to the central decision, there are also follow-up questions at the end of each week that provoke deeper analysis of the situation and reflection on the ethical principles involved. This prototype was initially developed within a learning management system, then supported by the Twine game engine, and studied in use in our 2021 NSF EETHICS grant.
In 2022-23 the game was redesigned and extended using the Godot game engine. In addition to streamlining the gameplay loop and reducing the set-up and data management required by instructors, this redesign gave instructors the option to let the game be student-paced and played by individual students, or to keep the instructor-led, 12-week, whole-class playstyle. Our proposed driving research question is "In what ways does individual student play differ from whole-class, instructor-led play with regard to learning that ethical behavior is situated?" In the next phase of our ongoing investigation, we plan to further evaluate the use of playful assessment to estimate its validity and reliability in comparison to current best practices of engineering ethics assessment.
  4. The field of machine ethics is in the process of designing and developing the computational underpinnings necessary for a robot to make ethical decisions in real-world environments. Yet a key issue faced by machine ethics researchers is the apparent lack of consensus as to the existence and nature of a correct moral theory. Our research seeks to grapple with, and perhaps sidestep, this age-old and ongoing philosophical problem by creating a robot architecture that does not strictly rely on one particular ethical theory. Rather, it would be informed by the insights gleaned from multiple ethical frameworks, perhaps including Kantianism, Utilitarianism, and Ross’s duty-based ethical theory, and by moral emotions. Arguably, moral emotions are an integral part of a human’s ethical decision-making process and thus need to be accounted for if robots are to make decisions that roughly approximate how humans navigate ethically complex circumstances. The aim of this presentation is to discuss the philosophical aspects of our approach.