

Title: Teaching a Robot Tasks of Arbitrary Complexity via Human Feedback
This paper addresses the problem of training a robot to carry out temporal tasks of arbitrary complexity from evaluative human feedback that may be inaccurate. A key idea explored in our work is a form of curriculum learning: the robot first masters simple tasks and then builds up to more complex ones. We show how a training procedure that uses knowledge of the formal task representation can decompose any task and train it efficiently in the size of its representation. We further provide a set of experiments supporting the claim that non-expert human trainers can decompose tasks in a way consistent with our theoretical results, with more than half of the participants successfully training all of our experimental missions. We also compared our algorithm with existing approaches; the results suggest that our method outperforms the alternatives, especially when the feedback contains mistakes.
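As a minimal, hypothetical sketch of the curriculum idea: treat a temporal task as an ordered list of subgoals, learn each subtask from noisy evaluative (+1/-1) feedback, and advance subtask by subtask. The action space, noise model, and every name below are illustrative assumptions, not the paper's actual algorithm or task representation.

    import random
    from collections import defaultdict

    # Hypothetical curriculum trainer: a temporal task is an ordered list
    # of subgoals; each subtask is learned in isolation from evaluative
    # feedback that is wrong with probability NOISE.
    NOISE = 0.1            # probability that a piece of feedback is mistaken
    ACTIONS = [0, 1, 2]    # toy action space

    def noisy_feedback(correct_action, action):
        """+1 for the right action, -1 otherwise, flipped with prob. NOISE."""
        fb = 1.0 if action == correct_action else -1.0
        return -fb if random.random() < NOISE else fb

    def train_subtask(correct_action, episodes=300, alpha=0.2):
        """Learn a single subtask from noisy evaluative feedback alone."""
        q = defaultdict(float)
        for _ in range(episodes):
            a = random.choice(ACTIONS)              # uniform exploration
            q[a] += alpha * (noisy_feedback(correct_action, a) - q[a])
        return max(ACTIONS, key=lambda a: q[a])     # greedy choice per subtask

    # Master simple subtasks first, then chain them into the full task.
    subgoals = [0, 2, 1]                            # illustrative subgoal order
    policy = [train_subtask(g) for g in subgoals]
    print("learned:", policy, " target:", subgoals)

Averaging out mistaken feedback in this way is what makes each small subtask learnable even when individual labels are unreliable, and decomposition keeps the number of such subproblems proportional to the size of the task representation.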
Award ID(s):
1836948
NSF-PAR ID:
10162890
Date Published:
2020
Journal Name:
Human Robot Interaction (HRI'20)
Page Range / eLocation ID:
649 to 657
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Teleoperation, i.e., controlling a robot with human motion, is promising for enabling a humanoid robot to move as dynamically as a human. But how human motion is mapped to the humanoid matters, because a human and a humanoid robot rarely have identical topologies and dimensions. This work presents an experimental study that uses reaction tests to compare joint space and task space mappings for dynamic teleoperation of an anthropomorphic robotic arm that possesses human-level dynamic motion capabilities. The experimental results suggest that the robot achieved similar, and in some cases human-level, dynamic performance with both mappings for the six participating human subjects. All subjects became proficient at teleoperating the robot with both mappings after practice, even though the subjects and the robot differed in size and link length ratio and the teleoperation required the subjects to move unintuitively. Yet most subjects developed their teleoperation proficiency more quickly with task space mapping than with joint space mapping after similar amounts of practice. The study also indicates the potential value of three-dimensional task space mapping, a teleoperation training simulator, and force feedback to the human pilot for intuitive and dynamic teleoperation of a humanoid robot's arms. A toy contrast of the two mappings is sketched after this entry.
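    To make the contrast concrete, here is a toy sketch of the two mappings for two-link planar arms whose link lengths differ, as the subjects' and the robot's did. The link lengths, sample pose, and scaling below are invented for illustration and are not the study's parameters.

        import numpy as np

        # Toy 2-link planar arms; the human and robot differ in link lengths.
        HUMAN_L = (0.30, 0.28)   # human upper-arm / forearm lengths (m), assumed
        ROBOT_L = (0.45, 0.40)   # robot link lengths (m), assumed

        def fk(q, links):
            """Forward kinematics: joint angles -> hand position (x, y)."""
            l1, l2 = links
            x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
            y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
            return np.array([x, y])

        def joint_space_map(q_human):
            """Joint space mapping: copy the human's joint angles directly."""
            return np.array(q_human)

        def task_space_map(q_human, scale=1.0):
            """Task space mapping: track the (scaled) human hand position via
            analytic 2-link inverse kinematics on the robot."""
            x, y = scale * fk(q_human, HUMAN_L)
            l1, l2 = ROBOT_L
            c2 = np.clip((x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2), -1, 1)
            q2 = np.arccos(c2)    # elbow-down solution
            q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2),
                                               l1 + l2 * np.cos(q2))
            return np.array([q1, q2])

        q_h = np.radians([40.0, 60.0])   # a sample human pose
        print("joint-space robot hand:", fk(joint_space_map(q_h), ROBOT_L))
        print("task-space  robot hand:", fk(task_space_map(q_h), ROBOT_L))

    With identical joint angles the mismatched link lengths put the robot's hand somewhere else than the human's; the task space mapping instead reproduces the hand position at the cost of different joint angles, which is the trade-off the study measures.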
  2. Physical interaction between humans and robots can help robots learn to perform complex tasks. The robot arm gains information by observing how the human kinesthetically guides it throughout the task. While prior works focus on how the robot learns, it is equally important that this learning be transparent to the human teacher. Visual displays that show the robot's uncertainty can potentially communicate this information; however, we hypothesize that visual feedback mechanisms miss out on the physical connection between the human and robot. In this work we present a soft haptic display that wraps around and conforms to the surface of a robot arm, adding a haptic signal at an existing point of contact without significantly affecting the interaction. We demonstrate how soft actuation creates a salient haptic signal while still allowing flexibility in device mounting. Using a psychophysics experiment, we show that users can accurately distinguish inflation levels of the wrapped display with an average Weber fraction of 11.4%. When we place the wrapped display around the arm of a robotic manipulator, users are able to interpret and leverage the haptic signal in sample robot learning tasks, improving identification of areas where the robot needs more training and enabling the user to provide better demonstrations. See videos of our device and user studies here: https://youtu.be/tX-2Tqeb9Nw. A toy Weber-fraction estimate follows this entry.
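    A Weber fraction is the just-noticeable difference (JND) divided by the reference intensity. The sketch below estimates one from hypothetical two-alternative forced-choice accuracies; all numbers are fabricated for illustration and are not the study's data or analysis pipeline.

        import numpy as np

        # Toy psychophysics: estimate a Weber fraction (JND / reference
        # intensity) from forced-choice accuracy at several comparison levels.
        reference   = 20.0   # reference inflation level (arbitrary units)
        comparisons = np.array([20.5, 21.0, 21.5, 22.0, 23.0, 24.0])
        p_correct   = np.array([0.52, 0.58, 0.66, 0.74, 0.88, 0.97])

        # JND: the increment over the reference that yields 75% correct,
        # found by linear interpolation along the psychometric curve.
        jnd = np.interp(0.75, p_correct, comparisons - reference)
        weber_fraction = jnd / reference
        print(f"JND ~ {jnd:.2f}, Weber fraction ~ {weber_fraction:.1%}")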
  3. Using the context of human-supervised object collection tasks, we explore policies for a robot to seek assistance from a human supervisor and avoid loss of human trust in the robot. We consider a human-robot interaction scenario in which a mobile manipulator chooses to collect objects either autonomously or through human assistance; the human supervisor monitors the robot's operation, assists when asked, or intervenes if the human perceives that the robot may not accomplish its goal. We design an optimal assistance-seeking policy for the robot in a Partially Observable Markov Decision Process (POMDP) setting in which human trust is a hidden state and the objective is to maximize collaborative performance. We conduct two sets of human-robot interaction experiments. The data from the first set of experiments are used to estimate POMDP parameters, from which we compute the optimal assistance-seeking policy used in the second experiment. For most participants, the estimated POMDP reveals that humans are more likely to intervene when their trust is low and the robot is performing a high-complexity task, and that the robot asking for assistance in high-complexity tasks can increase human trust in the robot. Our experimental results show that the proposed trust-aware policy yields superior performance compared with an optimal trust-agnostic policy. A toy belief update over a hidden trust state is sketched after this entry.
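    The sketch below shows one way a robot can track a hidden binary trust state with a Bayes filter and ask for assistance on high-complexity tasks when its belief in high trust drops. All probabilities are invented placeholders, whereas the paper estimates its POMDP parameters from experimental data and solves for an optimal policy rather than using a threshold rule.

        # Toy trust-aware assistance seeking: trust is a hidden binary state
        # (low/high); the robot keeps a Bayesian belief over it.
        P_INTERVENE = {"low": 0.6, "high": 0.1}   # P(intervention | trust), assumed
        P_STAY_HIGH = {"auto": 0.9, "ask": 0.97}  # P(high -> high | action), assumed
        P_RISE      = {"auto": 0.1, "ask": 0.4}   # P(low -> high | action), assumed

        def update_belief(b_high, action, intervened):
            """One Bayes-filter step over the hidden trust state."""
            # Predict: propagate trust through the action-dependent dynamics.
            b = b_high * P_STAY_HIGH[action] + (1.0 - b_high) * P_RISE[action]
            # Correct: weight by the likelihood of the observed intervention.
            lh = P_INTERVENE["high"] if intervened else 1.0 - P_INTERVENE["high"]
            ll = P_INTERVENE["low"] if intervened else 1.0 - P_INTERVENE["low"]
            return b * lh / (b * lh + (1.0 - b) * ll)

        def policy(b_high, high_complexity, threshold=0.5):
            """Ask for assistance on hard tasks whenever trust looks low."""
            return "ask" if high_complexity and b_high < threshold else "auto"

        b = 0.5    # uninformed prior on high trust
        for intervened in (True, True, False):
            act = policy(b, high_complexity=True)
            b = update_belief(b, act, intervened)
            print(f"action={act}  intervened={intervened}  P(high trust)={b:.2f}")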
  4. Aleksandra Faust, David Hsu (Eds.)
    Modern Reinforcement Learning (RL) algorithms are not sample-efficient enough to train on multi-step tasks in complex domains, impeding their wider deployment in the real world. We address this problem by leveraging the insight that RL models trained to complete one set of tasks can be repurposed to complete related tasks when given just a handful of demonstrations. Based on this insight, we propose See-SPOT-Run (SSR), a new computational approach to robot learning that enables a robot to complete a variety of real robot tasks in novel problem domains without task-specific training. SSR uses pretrained RL models to create vectors that represent model, task, and action relevance in demonstration and test scenes. SSR then compares these vectors via our Cycle Consistency Distance (CCD) metric to determine the next action to take. SSR completes 58% more task steps and 20% more trials than a baseline few-shot learning method that requires task-specific training. SSR also achieves a four-orders-of-magnitude improvement in compute efficiency, and improvements in sample efficiency ranging from 20% to three orders of magnitude, compared to the baseline and to training RL models from scratch. To our knowledge, we are the first to address multi-step tasks from demonstration on a real robot without task-specific training, where both the visual input and the action space output are high dimensional. Code is available in the supplement. A generic cycle-consistency distance is sketched after this entry.
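    The abstract does not specify CCD in detail, so the sketch below shows only a generic cycle-consistency distance between two sets of embedding vectors: map a demonstration vector to its nearest test vector, map back, and measure how far the round trip drifted. The data, dimensions, and function names are arbitrary illustrations, and the paper's actual metric may differ.

        import numpy as np

        def nearest(v, pool):
            """Index of the vector in `pool` closest to `v` (Euclidean)."""
            return int(np.argmin(np.linalg.norm(pool - v, axis=1)))

        def cycle_consistency_distance(i, demo_vecs, test_vecs):
            """Drift of the demo -> test -> demo round trip from demo_vecs[i]."""
            j = nearest(demo_vecs[i], test_vecs)    # demo -> test
            k = nearest(test_vecs[j], demo_vecs)    # test -> demo
            return float(np.linalg.norm(demo_vecs[i] - demo_vecs[k]))

        rng = np.random.default_rng(0)
        demo = rng.normal(size=(5, 8))                 # 5 demo-scene vectors
        test = demo + 0.05 * rng.normal(size=(5, 8))   # perturbed test vectors
        scores = [cycle_consistency_distance(i, demo, test)
                  for i in range(len(demo))]
        print("cycle-consistency distance per demo vector:",
              np.round(scores, 3))

    A low round-trip drift suggests the test scene matches a demonstrated step, so the corresponding demonstrated action is a good candidate for the next action.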