As intelligent systems gain autonomy and capability, it becomes vital to ensure that their objectives match those of their human users; this is known as the value-alignment problem. In robotics, value alignment is key to the design of collaborative robots that can integrate into human workflows, successfully inferring and adapting to their users’ objectives as they go. We argue that a meaningful solution to value alignment must combine multi-agent decision theory with rich mathematical models of human cognition, enabling robots to tap into people’s natural collaborative capabilities. We present a solution to the cooperative inverse reinforcement learning (CIRL) dynamic game based on well-established cognitive models of decision making and theory of mind. The solution captures a key reciprocity relation: the human will not plan her actions in isolation, but rather reason pedagogically about how the robot might learn from them; the robot, in turn, can anticipate this and interpret the human’s actions pragmatically. To our knowledge, this work constitutes the first formal analysis of value alignment grounded in empirically validated cognitive models.
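As a point of reference, the CIRL setting this abstract addresses is usually formalized as a two-player game in which only the human observes the reward parameter; the sketch below uses our own notation, which need not match the paper’s.

```latex
% Minimal CIRL formulation (notation ours): a two-player Markov game
% in which only the human H observes the reward parameter theta.
\[
  M = \langle S, \{A^{H}, A^{R}\}, T(\cdot \mid s, a^{H}, a^{R}),
      \Theta, r(s, a^{H}, a^{R}; \theta), P_{0}(\theta), \gamma \rangle
\]
% Both agents maximize the same expected return,
\[
  \mathbb{E}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\,
      r(s_{t}, a_{t}^{H}, a_{t}^{R}; \theta) \right],
\]
% so the robot maintains a belief b_t(theta); the pedagogic human picks
% actions that are informative under that belief, and the pragmatic robot
% updates b_t by inverting that pedagogic policy rather than a merely
% rational one. The reciprocity in the abstract is the fixed point of
% these two mutually conditioned policies.
```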
In situ bidirectional human-robot value alignment.
A prerequisite for social coordination is bidirectional communication between teammates, each playing two roles simultaneously: as receptive listeners and as expressive speakers. For robots working with humans in complex situations with multiple goals that differ in importance, failure to fulfill the expectations of either role could undermine group performance through a misalignment of values between humans and robots. Specifically, a robot needs to serve as an effective listener to infer human users’ intents from instructions and feedback, and as an expressive speaker to explain its decision processes to users. Here, we investigate how to foster effective bidirectional human-robot communication in the context of value alignment, in which collaborative robots and users form an aligned understanding of the importance of possible task goals. We propose an explainable artificial intelligence (XAI)
system in which a group of robots predicts users’ values by taking in situ feedback into consideration while communicating their decision processes to users through explanations. To learn from human feedback, our XAI system integrates a cooperative communication model for inferring human values associated with multiple desirable goals. To be interpretable to humans, the system simulates human mental dynamics and predicts optimal explanations using graphical models. We conducted psychological experiments to examine the core components of …
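To make the inference loop concrete, here is a minimal Python sketch of in situ value inference of the kind the abstract describes: the robot keeps a discrete belief over candidate goal-importance vectors and updates it from accept/reject feedback under a Boltzmann-rational user model. All names, the candidate set, and the feedback likelihood are our illustrative assumptions, not the paper’s implementation (which additionally models pedagogic communication and explanation).

```python
import numpy as np

# Candidate value vectors: each row weights the importance of three task
# goals. A small discrete hypothesis set keeps the sketch simple.
CANDIDATE_VALUES = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
    [1/3, 1/3, 1/3],
])

def proposal_utility(values, proposal):
    """Utility of a proposed goal allocation under one value vector."""
    return float(values @ proposal)

def feedback_likelihood(values, proposal, accepted, beta=3.0):
    """Boltzmann-rational accept/reject model: higher-utility proposals
    are more likely to be accepted (a standard assumption, not the paper's)."""
    p_accept = 1.0 / (1.0 + np.exp(-beta * (proposal_utility(values, proposal) - 0.5)))
    return p_accept if accepted else 1.0 - p_accept

def update_belief(belief, proposal, accepted):
    """One Bayesian update of the robot's belief over candidate values."""
    likelihoods = np.array([
        feedback_likelihood(v, proposal, accepted) for v in CANDIDATE_VALUES
    ])
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# Uniform prior; the user rejects a proposal that mostly serves goal 0,
# so posterior mass shifts away from the goal-0-heavy hypothesis.
belief = np.full(len(CANDIDATE_VALUES), 1.0 / len(CANDIDATE_VALUES))
belief = update_belief(belief, proposal=np.array([0.8, 0.1, 0.1]), accepted=False)
print(belief)
```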
- Award ID(s): 2015577
- Publication Date:
- NSF-PAR ID: 10351399
- Journal Name: Science Robotics
- ISSN: 2470-9476
- Sponsoring Org: National Science Foundation
More Like this
Explainability has emerged as a critical AI research objective, but the breadth of proposed methods and application domains suggests that criteria for explanation vary greatly. In particular, what counts as a good explanation, and what kinds of explanation are computationally feasible, has become trickier in light of opaque “black box” systems such as deep neural networks. Explanation in such cases has drifted from what many philosophers stipulated as having to involve deductive and causal principles to mere “interpretation,” which approximates what happened in the target system to varying degrees. However, such post hoc constructed rationalizations are highly problematic for social robots that operate interactively in spaces shared with humans. In such social contexts, explanations of behavior, and in particular justifications for violations of expected behavior, should make reference to socially accepted principles and norms. In this article, we show how a social robot’s actions can face explanatory demands for how it came to act on its decision, what goals, tasks, or purposes its design had those actions pursue, and what norms or social constraints the system recognizes in the course of its action. As a result, we argue that explanations for social robots will need to be accurate representations …
This paper presents a novel architecture to attain a Unified Planner for Socially-aware Navigation (UP-SAN) and explains its need in Socially Assistive Robotics (SAR) applications. Our approach emphasizes interpersonal distance and how spatial communication can be used to build a unified planner for a human-robot collaborative environment. Socially-Aware Navigation (SAN) is vital to making humans feel comfortable and safe around robots; HRI studies have shown that the importance of SAN transcends safety and comfort. SAN plays a crucial role in the perceived intelligence, sociability, and social capacity of the robot, thereby increasing the acceptance of robots in public places. Human environments are very dynamic and pose serious social challenges to robots intended for human interaction. For robots to cope with the changing dynamics of a situation, there is a need to infer intent and detect changes in the interaction context. SAN has gained immense interest in the social robotics community; to the best of our knowledge, however, there is no planner that can adapt to different interaction contexts spontaneously after autonomously sensing that context. Most of the recent efforts involve social path planning for a single context. In this work, we propose a novel approach for a Unified …
Over the past few decades, there have been many studies of human-human physical interaction to better understand why humans physically interact so effectively and how dyads outperform individuals in certain motor tasks. Because of the different methodologies and experimental setups in these studies, however, it is difficult to draw general conclusions as to the reasons for this improved performance. In this study, we propose an open-source experimental framework for the systematic study of the effect of human-human interaction, as mediated by robots, at the ankle joint. We also propose a new framework to study various interactive behaviors (i.e., collaborative, cooperative, and competitive tasks) that can be emulated using a virtual spring connecting human pairs. To validate the proposed experimental framework, we perform a transparency analysis, which is closely related to haptic rendering performance. We compare muscle EMG and ankle motion data while subjects are barefoot, attached to the unpowered robot, and attached to the powered robot implementing transparency control. We also validate the performance in rendering virtual springs covering a range of stiffness values (5-50 Nm/rad) while the subjects track several desired trajectories (sine waves at frequencies between 0.1 and 1.1 Hz). Finally, we study the performance of the system …
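For concreteness, here is a minimal sketch (not the authors’ code) of the virtual-spring coupling such a framework renders between two ankle robots. The 5-50 Nm/rad stiffness range comes from the abstract; the damping term and all names are our assumptions.

```python
def coupling_torque(theta_self, theta_partner, k=25.0, b=0.5,
                    dtheta_self=0.0, dtheta_partner=0.0):
    """Torque (Nm) rendered on one partner's ankle by the virtual spring.

    k: spring stiffness in Nm/rad, within the 5-50 Nm/rad range the
       framework validates. The small damping term b (Nm*s/rad) is our
       assumption for stability, not a detail stated in the abstract.
    """
    return k * (theta_partner - theta_self) + b * (dtheta_partner - dtheta_self)

# Equal and opposite torques couple the pair: if partner A leads by 0.1 rad,
# A is pulled back and B pulled forward with the same 2.5 Nm magnitude.
tau_on_A = coupling_torque(theta_self=0.1, theta_partner=0.0)
tau_on_B = coupling_torque(theta_self=0.0, theta_partner=0.1)
print(tau_on_A, tau_on_B)  # -2.5, 2.5
```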
A means to communicate by touch is established when two humans grasp a common rigid object, and such communication is thought to play a role in the superior performance that two humans acting together demonstrate over either agent acting alone. But the superior performance demonstrated by dyads, whether in making point-to-point movements or tracking unpredictable targets, is strictly empirical to date. Mechanistic accounts for the performance improvement, and explanations relying on haptic communication, have been lacking. In this paper we develop a model of haptic communication across a linkage connecting two agents that provides an explicit means for the dyad to achieve a higher loop gain than either agent acting alone, and higher than the two agents acting together without haptic feedback. We show that haptic communication closes an additional feedback loop through the linkage and the sensorimotor control systems of both agents. This feedback loop contributes a new factor to the loop gain and thus a definitive mechanism for the dyad to improve performance. Our model predicts higher internal forces with haptic communication, which have previously been observed. Additional testable hypotheses emerge from the model and create a promising future means to transfer human-human dyad behaviors to …
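A simplified single-axis reading of the loop-gain argument, in our notation rather than the paper’s model: with a shared plant P(s) and sensorimotor controllers C_1(s) and C_2(s), rigid coupling sums the agents’ control actions, and the interaction force sensed through the linkage closes one more loop.

```latex
% Illustrative loop gains (our simplification, not the paper's model):
\[
  L_{\text{solo}}(s) = C_{1}(s)\,P(s), \qquad
  L_{\text{dyad}}(s) = \bigl(C_{1}(s) + C_{2}(s)\bigr)\,P(s)
                       + L_{\text{haptic}}(s),
\]
% where L_haptic is the extra contribution from each agent feeding back
% the interaction force sensed across the linkage. Each added contribution
% raises the loop gain, consistent with the abstract's predictions of
% lower tracking error and higher internal forces.
```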