This content will become publicly available on January 1, 2023

Title: In situ bidirectional human-robot value alignment.
A prerequisite for social coordination is bidirectional communication between teammates, each playing two roles simultaneously: as receptive listeners and expressive speakers. For robots working with humans in complex situations with multiple goals that differ in importance, failure to fulfill the expectation of either role could undermine group performance due to misalignment of values between humans and robots. Specifically, a robot needs to serve as an effective listener to infer human users’ intents from instructions and feedback and as an expressive speaker to explain its decision processes to users. Here, we investigate how to foster effective bidirectional human-robot communication in the context of value alignment, in which collaborative robots and users form an aligned understanding of the importance of possible task goals. We propose an explainable artificial intelligence (XAI) system in which a group of robots predicts users’ values by taking in situ feedback into consideration while communicating their decision processes to users through explanations. To learn from human feedback, our XAI system integrates a cooperative communication model for inferring human values associated with multiple desirable goals. To be interpretable to humans, the system simulates human mental dynamics and predicts optimal explanations using graphical models. We conducted psychological experiments to examine the core components of the proposed computational framework. Our results show that real-time human-robot mutual understanding in complex cooperative tasks is achievable with a learning model based on bidirectional communication. We believe that this interaction framework can shed light on bidirectional value alignment in communicative XAI systems and, more broadly, in future human-machine teaming systems.
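The value-inference side of the framework can be illustrated with a minimal sketch: maintain a belief over candidate goal-importance profiles and update it from in situ user feedback by Bayes' rule. All names, profiles, and the likelihood function below are hypothetical illustrations, not the paper's actual cooperative communication model.

```python
import itertools

def update_value_belief(belief, feedback, likelihood):
    """One Bayesian update of a belief over candidate goal-weight profiles.

    belief:     dict mapping a weight profile (tuple) -> prior probability
    feedback:   observed user feedback (e.g. "approve" or "reject")
    likelihood: function (feedback, profile) -> P(feedback | profile)
    """
    posterior = {p: belief[p] * likelihood(feedback, p) for p in belief}
    total = sum(posterior.values())
    if total == 0:          # feedback impossible under every profile
        return belief       # keep the prior rather than divide by zero
    return {p: v / total for p, v in posterior.items()}

# Illustrative setup: three goals whose relative importance is unknown.
profiles = list(itertools.permutations((0.6, 0.3, 0.1)))
belief = {p: 1 / len(profiles) for p in profiles}

# Hypothetical likelihood: the user approves a plan favoring goal 0
# with probability equal to the weight that profile puts on goal 0.
def lik(feedback, profile):
    return profile[0] if feedback == "approve" else 1 - profile[0]

belief = update_value_belief(belief, "approve", lik)
best = max(belief, key=belief.get)
```

After one "approve", the belief shifts toward profiles that weight goal 0 most heavily, which is the sense in which feedback aligns the robot's value estimate with the user's.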
Journal Name: Science Robotics
Sponsoring Org: National Science Foundation
More Like this
  1. As intelligent systems gain autonomy and capability, it becomes vital to ensure that their objectives match those of their human users; this is known as the value-alignment problem. In robotics, value alignment is key to the design of collaborative robots that can integrate into human workflows, successfully inferring and adapting to their users’ objectives as they go. We argue that a meaningful solution to value alignment must combine multi-agent decision theory with rich mathematical models of human cognition, enabling robots to tap into people’s natural collaborative capabilities. We present a solution to the cooperative inverse reinforcement learning (CIRL) dynamic game based on well-established cognitive models of decision making and theory of mind. The solution captures a key reciprocity relation: the human will not plan her actions in isolation, but rather reason pedagogically about how the robot might learn from them; the robot, in turn, can anticipate this and interpret the human’s actions pragmatically. To our knowledge, this work constitutes the first formal analysis of value alignment grounded in empirically validated cognitive models.
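The pedagogic/pragmatic reciprocity described above can be sketched as a short recursion: a literal robot inverts a non-strategic human model, a pedagogic human chooses actions that would best inform that literal robot, and a pragmatic robot inverts the pedagogic human instead. The world, goals, actions, and probabilities below are invented for illustration and are not the paper's formal CIRL solution.

```python
def normalize(d):
    s = sum(d.values())
    return {k: v / s for k, v in d.items()}

# Hypothetical world: two goals, two human actions observable by the robot.
# literal[action][goal] = P(action | goal) for a non-strategic human.
literal = {
    "go_left":  {"goal_A": 0.7, "goal_B": 0.3},
    "go_right": {"goal_A": 0.4, "goal_B": 0.6},
}
prior = {"goal_A": 0.5, "goal_B": 0.5}

def literal_robot(action):
    """Bayes over the non-strategic human model."""
    return normalize({g: literal[action][g] * prior[g] for g in prior})

def pedagogic_human(goal):
    """Picks actions in proportion to how well the literal robot
    would infer the true goal from them."""
    return normalize({a: literal_robot(a)[goal] for a in literal})

def pragmatic_robot(action):
    """Inverts the pedagogic human instead of the literal one."""
    return normalize({g: pedagogic_human(g)[action] * prior[g] for g in prior})
```

With these numbers, the pedagogic human prefers `go_left` when pursuing `goal_A`, and the pragmatic robot correctly reads `go_left` as evidence for `goal_A`, which is the reciprocity the abstract highlights.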
  2. Explainability has emerged as a critical AI research objective, but the breadth of proposed methods and application domains suggests that criteria for explanation vary greatly. In particular, what counts as a good explanation, and what kinds of explanation are computationally feasible, has become trickier in light of opaque “black box” systems such as deep neural networks. Explanation in such cases has drifted from what many philosophers stipulated as having to involve deductive and causal principles to mere “interpretation,” which approximates what happened in the target system to varying degrees. However, such post hoc constructed rationalizations are highly problematic for social robots that operate interactively in spaces shared with humans. For in such social contexts, explanations of behavior, and, in particular, justifications for violations of expected behavior, should make reference to socially accepted principles and norms. In this article, we show how a social robot’s actions can face explanatory demands for how it came to act on its decision, what goals, tasks, or purposes its design had those actions pursue, and what norms or social constraints the system recognizes in the course of its action. As a result, we argue that explanations for social robots will need to be accurate representations of the system’s operation along causal, purposive, and justificatory lines. These explanations will need to generate appropriate references to principles and norms—explanations based on mere “interpretability” will ultimately fail to connect the robot’s behaviors to its appropriate determinants. We then lay out the foundations for a cognitive robotic architecture for HRI, together with particular component algorithms, for generating explanations and engaging in justificatory dialogues with human interactants. Such explanations track the robot’s actual decision-making and behavior, which themselves are determined by normative principles the robot can describe and use for justifications.
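A norm-referencing justification of the kind this abstract calls for can be sketched very simply: check an action description against an explicit norm store and cite the satisfied norms while flagging violations. The norm store, predicates, and wording below are hypothetical, not components of the authors' architecture.

```python
# Hypothetical norm store: each norm has an id, a predicate over an
# action description, and the human-readable principle it encodes.
NORMS = [
    ("keep_distance", lambda a: a.get("min_distance_m", 0.0) >= 0.5,
     "maintain at least 0.5 m from people"),
    ("announce_motion", lambda a: a.get("announced", False),
     "announce intended motion before moving near a person"),
]

def justify(action):
    """Return a justification citing satisfied norms and flagging violations."""
    satisfied = [text for _, pred, text in NORMS if pred(action)]
    violated = [text for _, pred, text in NORMS if not pred(action)]
    parts = []
    if satisfied:
        parts.append("I acted to " + " and to ".join(satisfied) + ".")
    if violated:
        parts.append("I failed to " + " and to ".join(violated) +
                     ", which I should explain or repair.")
    return " ".join(parts)

report = justify({"min_distance_m": 0.8, "announced": False})
```

The point of the sketch is that the justification is generated from the same normative principles that (in the proposed architecture) actually govern behavior, rather than being rationalized post hoc.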
  3. This paper presents a novel architecture to attain a Unified Planner for Socially-aware Navigation (UP-SAN) and explains its need in Socially Assistive Robotics (SAR) applications. Our approach emphasizes interpersonal distance and how spatial communication can be used to build a unified planner for a human-robot collaborative environment. Socially-Aware Navigation (SAN) is vital to making humans feel comfortable and safe around robots; HRI studies have shown that the importance of SAN transcends safety and comfort. SAN plays a crucial role in the perceived intelligence, sociability, and social capacity of the robot, thereby increasing the acceptance of robots in public places. Human environments are very dynamic and pose serious social challenges to robots intended for human interactions. For robots to cope with the changing dynamics of a situation, there is a need to infer intent and detect changes in the interaction context. SAN has gained immense interest in the social robotics community; to the best of our knowledge, however, there is no planner that can adapt to different interaction contexts spontaneously after autonomously sensing that context. Most recent efforts involve social path planning for a single context. In this work, we propose a novel approach for a Unified Planner for SAN that can plan and execute trajectories that are human-friendly for an autonomously sensed interaction context. Our approach augments the navigation stack of the Robot Operating System (ROS) using machine learning and optimization tools. We modified the ROS navigation stack using a machine learning-based context classifier and a PaCcET-based local planner to achieve the goals of UP-SAN. We discuss our preliminary results and concrete plans for putting the pieces together in achieving UP-SAN.
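The "sense the context, then adapt the planner" pipeline can be sketched as two stages: a classifier that maps scene features to an interaction context, and a per-context set of cost weights handed to the local planner. The contexts, rules, and weight triples below are invented stand-ins; UP-SAN's actual classifier is machine-learned and its local planner is PaCcET-based, neither of which is reproduced here.

```python
# Hypothetical feature-based context selection for a SAN planner.
CONTEXTS = ("hallway_passing", "art_gallery", "joining_a_queue")

def classify_context(features):
    """Pick an interaction context from scene features (toy rules standing
    in for the paper's machine-learned classifier)."""
    if features.get("people_in_line", 0) >= 3:
        return "joining_a_queue"
    if features.get("people_stationary", 0) > features.get("people_moving", 0):
        return "art_gallery"
    return "hallway_passing"

# Per-context cost weights a local planner could optimize:
# (path_length, personal_space, flow_alignment) -- invented for illustration.
COST_WEIGHTS = {
    "hallway_passing": (1.0, 2.0, 3.0),
    "art_gallery":     (1.0, 4.0, 1.0),
    "joining_a_queue": (0.5, 3.0, 0.5),
}

def plan_weights(features):
    """Weights the trajectory optimizer would use in the sensed context."""
    return COST_WEIGHTS[classify_context(features)]
```

Swapping the weight table per sensed context is what makes the planner "unified": one optimizer, many social behaviors.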
  4. Over the past few decades, there have been many studies of human-human physical interaction to better understand why humans physically interact so effectively and how dyads outperform individuals in certain motor tasks. Because of the different methodologies and experimental setups in these studies, however, it is difficult to draw general conclusions as to the reasons for this improved performance. In this study, we propose an open-source experimental framework for the systematic study of the effect of human-human interaction, as mediated by robots, at the ankle joint. We also propose a new framework to study various interactive behaviors (i.e., collaborative, cooperative, and competitive tasks) that can be emulated using a virtual spring connecting human pairs. To validate the proposed experimental framework, we perform a transparency analysis, which is closely related to haptic rendering performance. We compare muscle EMG and ankle motion data while subjects are barefoot, attached to the unpowered robot, and attached to the powered robot implementing transparency control. We also validate the performance in rendering virtual springs covering a range of stiffness values (5-50 Nm/rad) while the subjects track several desired trajectories (sine waves at frequencies between 0.1 and 1.1 Hz). We then study the performance of the system in human-human interaction under nine different interactive conditions. Finally, we demonstrate the feasibility of the system in studying human-human interaction under different interactive behaviors.
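The virtual-spring coupling described above has a simple core: each partner feels a torque proportional to the difference between the two ankle angles, with the stated stiffness range, while tracking a sinusoidal target. This is a minimal sketch of that rendering law; the function names, amplitude, and sample values are assumptions, not the framework's implementation.

```python
import math

def spring_torques(theta_a, theta_b, stiffness):
    """Torques (Nm) rendered to each partner by a virtual spring coupling
    two ankle angles (rad); stiffness in Nm/rad (the paper uses 5-50)."""
    tau_a = stiffness * (theta_b - theta_a)   # pulls A toward B
    tau_b = stiffness * (theta_a - theta_b)   # equal and opposite on B
    return tau_a, tau_b

def sine_target(t, freq_hz, amplitude_rad=0.2):
    """Desired trajectory like the tracking task (0.1-1.1 Hz sine waves);
    the amplitude here is an invented example value."""
    return amplitude_rad * math.sin(2 * math.pi * freq_hz * t)

# Example: partner B leads partner A by 0.15 rad under a 20 Nm/rad spring.
tau_a, tau_b = spring_torques(0.10, 0.25, stiffness=20.0)
```

Because the two torques are equal and opposite, the spring transmits interaction forces without injecting net torque, which is why it can emulate collaborative, cooperative, or competitive behaviors purely through the partners' relative motion.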
  5. A means to communicate by touch is established when two humans grasp a common rigid object, and such communication is thought to play a role in the superior performance two humans acting together are able to demonstrate over either agent acting alone. But the superior performance demonstrated by dyads, whether in making point-to-point movements or tracking unpredictable targets, is strictly empirical to date. Mechanistic accounts for the performance improvement and explanations relying on haptic communication have been lacking. In this paper, we develop a model of haptic communication across a linkage connecting two agents that provides an explicit means for the dyad to achieve a higher loop gain than either agent acting alone and higher than the two agents acting together without haptic feedback. We show that haptic communication closes an additional feedback loop through the linkage and the sensorimotor control systems of both agents. This feedback loop contributes a new factor to the loop gain and thus a definitive mechanism for the dyad to improve performance. Our model predicts higher internal forces with haptic communication, which have previously been observed. Additional testable hypotheses emerge from the model and create a promising future means to transfer human-human dyad behaviors to human-robot teams.
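The loop-gain argument can be illustrated with a scalar toy model: each agent's sensorimotor controller contributes a gain term, the haptic channel contributes an additional one, and a higher open-loop gain shrinks closed-loop tracking error. The numbers and the additive form below are illustrative assumptions, not the paper's transfer-function model.

```python
def loop_gain(agent_gains, haptic_gain=0.0):
    """Toy scalar stand-in for the dyad's open-loop gain: individual
    sensorimotor gains sum, and the haptic channel adds an extra term
    (values are invented for illustration)."""
    return sum(agent_gains) + haptic_gain

def tracking_error(gain):
    """Steady-state error of a unity-feedback loop: 1 / (1 + G)."""
    return 1.0 / (1.0 + gain)

solo = loop_gain([2.0])                               # one agent alone
dyad_no_haptic = loop_gain([2.0, 1.5])                # dyad, haptic channel cut
dyad_haptic = loop_gain([2.0, 1.5], haptic_gain=0.8)  # dyad with haptic loop
```

The ordering solo < dyad-without-haptics < dyad-with-haptics, and the corresponding drop in tracking error, is the qualitative prediction the abstract describes; the extra `haptic_gain` term plays the role of the feedback loop closed through the linkage.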