

Title: Toward Hybrid Relational-Normative Models of Robot Cognition
Most previous work on enabling robots' moral competence has used norm-based systems of moral reasoning. However, a number of limitations of norm-based ethical theories have been widely acknowledged. These limitations may be addressed by role-based ethical theories, which have been extensively discussed in the philosophy of technology literature but have received little attention within robotics. My work proposes a hybrid role/norm-based model of robot cognitive processes, including moral cognition.
Award ID(s):
1909847 1849348
NSF-PAR ID:
10265915
Journal Name:
ACM/IEEE International Conference on Human-Robot Interaction
Page Range / eLocation ID:
568 to 570
Sponsoring Org:
National Science Foundation
More Like this
  1. Knowledge quality assessment (KQA) was developed to analyze the role of knowledge in high-stakes, urgent situations characterized by deep uncertainty and ignorance. Governing coastal flood risk in the face of climate change is typical of such situations, which limit the ability to establish objective, reliable, and valid facts. This paper aims to identify the moral frameworks that stakeholders use to judge flood risk situations under climate change, and to infer knowledge legitimacy criteria from them. Knowledge legitimacy, defined as being respectful of stakeholders' divergent values and beliefs, is one of three broad criteria that have been proposed to assess knowledge quality in such situations, alongside credibility (scientific adequacy) and salience (relevance to the needs of decision makers). Knowledge legitimacy has mainly been studied ex-post (once knowledge has been deployed), in a literature analyzing how stakeholders' participation contributes to it. Very little is known about ex-ante characteristics (i.e., characteristics that can be observed before knowledge is deployed) that would make some types of knowledge more legitimate than others. We see this as a significant blind spot in the analysis of knowledge and its role under deep uncertainty. In this paper we posit that this blind spot may be partly addressed. To do so, we first identify the ethical frameworks that stakeholders use to judge a situation of risk under rapidly changing conditions, and then associate these frameworks with characteristics of knowledge. We tested this conceptualization through a case study approach centered on flood risk on the French Atlantic coast.
We adopted a narrative approach to analyze two diachronic corpora of interviews conducted in 2010–2012 (33 interviews) and 2020 (15 interviews), treating them as narratives of a risk situation. We thematically coded these along themes considered as metanarratives, which are associated with predefined (deontology, consequentialism, virtue ethics) and emerging (discourse ethics, connection ethics, and a naturalistic ethic) ethical theories. Our results show that, when faced with flood risks, stakeholders tell stories that mobilize several metaethical frameworks as guiding principles, in the form of both procedural and substantive injunctions. To respect what we interpret as manifestations of stakeholders' moral stances, our results indicate that knowledge legitimacy may be assessed against the following criteria: lability, debatability, and adaptability; degree of co-production invested; place-based approach; and ability to include lessons that would be given by nature. The operationalization of these criteria is promising at a time when the knowledge used for decision making under uncertainty is increasingly contested on the grounds of its legitimacy.
  2. Because robots are perceived as moral agents, they must behave in accordance with human systems of morality. This responsibility is especially acute for language-capable robots, because moral communication is a method for building moral ecosystems. Language-capable robots must not only ensure that what they say adheres to moral norms; they must also actively engage in moral communication to regulate and encourage human compliance with those norms. In this work, we describe four experiments (total N = 316) across which we systematically evaluate two different moral communication strategies that robots could use to influence human behavior: a norm-based strategy grounded in deontological ethics, and a role-based strategy grounded in role ethics. Specifically, we assess the effectiveness of robots that use these two strategies to encourage human compliance with norms grounded in expectations of behavior associated with certain social roles. Our results suggest two major findings, demonstrating the importance of moral reflection and moral practice for effective moral communication: first, opportunities for reflection on ethical principles may increase the efficacy of robots' role-based moral language; and second, following robots' moral language with opportunities for moral practice may facilitate role-based moral cultivation.
  3.
    We examined how robots can successfully serve as moral advisors for humans. We evaluated the effectiveness of moral advice grounded in deontological, virtue, and Confucian role ethics frameworks in encouraging humans to make honest decisions. Participants were introduced to a tempting situation where extra monetary gain could be earned by choosing to cheat (i.e., violating the norm of honesty). Prior to their decision, a robot encouraged honest choices by offering a piece of moral advice grounded in one of the three ethics frameworks. While the robot’s advice was overall not effective at discouraging dishonest choices, there was preliminary evidence indicating the relative effectiveness of moral advice drawn from deontology. We also explored how different cultural orientations (i.e., vertical and horizontal collectivism and individualism) influence honest decisions across differentially-framed moral advice. We found that individuals with a strong cultural orientation of establishing their own power and status through competition (i.e., high vertical individualism) were more likely to make dishonest choices, especially when moral advice was drawn from virtue ethics. Our findings suggest the importance of considering different ethical frameworks and cultural differences to design robots that can guide humans to comply with the norm of honesty. 
  4. Empirical studies have suggested that language-capable robots have the persuasive power to shape shared moral norms based on how they respond to human norm violations. This persuasive power presents cause for concern, but also an opportunity to persuade humans to cultivate their own moral development. We argue that a truly socially integrated and morally competent robot must be willing to communicate its objection to humans' proposed violations of shared norms by using strategies such as blame-laden rebukes, even if doing so may violate other standing norms, such as politeness. Drawing on Confucian ethics, we argue that a robot's ability to employ blame-laden moral rebukes in response to unethical human requests is crucial for cultivating a flourishing "moral ecology" of human–robot interaction. Such a positive moral ecology allows human teammates to develop their own moral reflection skills and grow their own virtues. Furthermore, this ability can and should be considered one criterion for assessing artificial moral agency. Finally, this paper discusses potential implications of Confucian theories for designing socially integrated and morally competent robots.
  5. What responsibility do faculty leaders have to understand the ethics frameworks of their faculty colleagues? To what extent do leaders have the capacity to enact that responsibility, given constraints on curricular space, expertise, basic communication skills, and the political climate? The landscape of disciplinary ethics frameworks, or the value content and structured experiences that shape professional development and disciplinary enculturation, reaches wide across the curriculum and deep into the discipline [1][2][3]. This landscape might include frameworks ranging from accrediting bodies and institutional compliance structures to state and national laws and departmental cultures. Coupled with the diversity of specializations within a single discipline, this landscape is richly complex. Yet faculty leaders play important roles in shaping departmental and programmatic cultures, which are at least partially informed by the disciplinary value landscape. The objective of this paper is to build on previous work [4] to explore this problem of faculty leader responsibility by contrasting faculty leaders' perspectives on disciplinary values with the values evidenced by their professional organizations. To evidence this contrast, we compare data from interviews with faculty leaders in departments of biology and computer science at a large metropolitan, research-intensive Hispanic-Serving Institution (HSI) against data scraped from the websites of the professional organizations those leaders reference as ethics frameworks. We analyze both sets of data using content analysis methods to examine qualitative and quantitative differences between them. This comparison is part of a larger institutional study looking at this problem across a wide diversity of disciplines [5]. We find an anticipated disparity between identification of the disciplinary frameworks and their content, opening space for discussion about the impact of national ethics frameworks at the local disciplinary level.
But we also find an unanticipated diversity of types of ethics frameworks identified by faculty leaders, demonstrating the complexity of how value frameworks inform disciplinary enculturation through leadership and training. Based on our findings, we articulate the relationship between responsibility and accountability [6] in the process of values-driven disciplinary enculturation. This work is relevant to ethics in that if ethics frameworks and the values they encode play a role in disciplinary enculturation, and there is a disconnect between faculty leaders' perceptions of ethics frameworks and their disciplines' explicit communications of their values, then the processes and practices of disciplinary enculturation could be more tightly connected to disciplinary values, resulting in more richly ethical professionals. *Note: a version of this abstract is also submitted concurrently as a presentation to the Association for Practical and Professional Ethics (APPE), which does not publish abstracts or proceedings papers.
[1] Tuana, N. (2013). "Embedding Philosophers in the Practices of Science: Bringing Humanities to the Sciences." Synthese 190(11): 1955–1973.
[2] West, C. and Chur-Hansen, A. (2004). "Ethical Enculturation: The Informal and Hidden Ethics Curricula at an Australian Medical School." Focus on Health Professional Education: A Multi-Disciplinary Journal 6(1): 85–99.
[3] Nieusma, D. and Cieminski, M. (2018). "Ethics Education as Enculturation: Student Learning of Personal, Social, and Professional Responsibility." 2018 ASEE Annual Conference and Exposition. Paper 23665.
[4] Pinkert, L.A., Taylor, L., Beever, J., Kuebler, S.M., and Klonoff, E. (2022). "Disciplinary Leaders' Perceptions of Ethics: An Interview-Based Study of Ethics Frameworks." 2022 ASEE Annual Conference and Exposition. https://peer.asee.org/41614.
[5] National Science Foundation. "Award Abstract #2024296: Institutional Transformation: Intersections of Moral Foundations and Ethics Frameworks in STEM Enculturation." https://www.nsf.gov/awardsearch/showAward?AWD_ID=2024296, 2020.