Title: Why We Should Build Robots That Both Teach and Learn
In this paper, we argue in favor of creating robots that both teach and learn. We propose a methodology for building robots that can learn a skill from an expert, perform the skill independently or collaboratively with the expert, and then teach the same skill to a novice. This requires combining insights from learning from demonstration, human-robot collaboration, and intelligent tutoring systems to develop knowledge representations that can be shared across all three components. As a case study for our methodology, we developed a glockenspiel-playing robot. The robot begins as a novice, learns how to play musical harmonies from an expert, collaborates with the expert to complete harmonies, and then teaches the harmonies to novice users. This methodology allows for new evaluation metrics that provide a thorough understanding of how well the robot has learned and enables a robot to act as an efficient facilitator for teaching across temporal and geographic separation.
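As a rough illustration of the shared-representation idea in this abstract, the sketch below shows one hypothetical way a single skill structure could serve all three components: learned from a demonstration, used to fill in a partner's missing notes during collaboration, and turned into tutoring prompts for a novice. The Harmony class and the function names are illustrative assumptions, not the paper's actual implementation.

# Hypothetical sketch: one skill representation shared by the learning,
# collaboration, and tutoring components described in the abstract.
# Names (Harmony, learn_from_demo, fill_missing_notes, tutoring_prompt)
# are illustrative, not taken from the paper.
from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass
class Harmony:
    """A harmony as an ordered list of chords; each chord is a set of glockenspiel bars."""
    chords: List[Set[str]]

def learn_from_demo(demo: List[Tuple[float, str]], window: float = 0.25) -> Harmony:
    """Group demonstrated (timestamp, note) strikes into chords by temporal proximity."""
    chords: List[Set[str]] = []
    last_t = None
    for t, note in sorted(demo):
        if last_t is None or t - last_t > window:
            chords.append(set())
        chords[-1].add(note)
        last_t = t
    return Harmony(chords)

def fill_missing_notes(skill: Harmony, partner_notes: Set[str]) -> List[Set[str]]:
    """Collaboration: for each chord, play the notes the human partner does not cover."""
    return [chord - partner_notes for chord in skill.chords]

def tutoring_prompt(skill: Harmony, step: int) -> str:
    """Tutoring: turn the same representation into an instruction for a novice."""
    return f"Step {step + 1}: strike {', '.join(sorted(skill.chords[step]))} together."

# Example: learn a two-chord harmony, then reuse it for collaboration and teaching.
demo = [(0.0, "C"), (0.05, "E"), (1.0, "G"), (1.02, "B")]
skill = learn_from_demo(demo)
print(fill_missing_notes(skill, partner_notes={"C"}))  # robot covers E, then G and B
print(tutoring_prompt(skill, 0))

The point of keeping the representation this small is that the same object can flow unchanged between the learning, collaboration, and tutoring components.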
Award ID(s):
1813651
PAR ID:
10284318
Author(s) / Creator(s):
Date Published:
Journal Name:
HRI '21: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction
Page Range / eLocation ID:
187 to 196
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. The introduction of collaborative robots (cobots) into the workplace has presented both opportunities and challenges for those seeking to use their functionality. Prior research has shown that, despite the capabilities cobots afford, there is a disconnect between those capabilities and the applications in which they are currently deployed, partly due to a lack of effective cobot-focused instruction in the field. Experts who work successfully within this collaborative domain can offer insight into the considerations and processes they use to capture cobot capabilities more effectively. Based on an analysis of expert insights in the collaborative interaction design space, we developed a set of Expert Frames and integrated them into a new training and programming system that can be used to teach novice operators to think, program, and troubleshoot the way experts do. We present our system and case studies that demonstrate how Expert Frames enable novice users to analyze and learn from complex cobot application scenarios.
  2. In this paper, a hybrid shared controller is proposed for assisting human novice users in emulating human expert users within a human-automation interaction framework. This work is motivated by the goal of letting novice users learn the skills of expert users with automation as a medium. The automation interacts with human users in two ways: it learns how to optimally control the system from the expert's demonstrations through offline computation, and it assists the novice in real time without excessive intervention, based on an inference of the novice's skill level within the proposed shared controller. The automation takes more control authority when the novice's skill level is poor, and it cedes more control authority when the novice's skill level is close to that of the expert, so that the novice can learn from his or her own control experience. The proposed scheme is shown to improve system performance while minimizing intervention from the automation, as demonstrated in an illustrative human-in-the-loop application example.
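The abstract in item 2 describes allocating control authority between the automation and the novice based on an inferred skill level. The sketch below is a minimal, hypothetical version of that idea; the skill estimate and the linear blending law are assumptions made for illustration, not the controller proposed in the paper.

# Illustrative sketch of skill-based authority allocation (item 2).
# The blending law and skill estimate are assumptions, not the paper's controller.
import numpy as np

def skill_level(novice_traj: np.ndarray, expert_traj: np.ndarray) -> float:
    """Crude skill estimate in [0, 1]: 1 means the novice tracks the expert closely."""
    err = np.linalg.norm(novice_traj - expert_traj, axis=1).mean()
    return float(np.exp(-err))

def shared_control(u_novice: np.ndarray, u_auto: np.ndarray, skill: float) -> np.ndarray:
    """Blend inputs: low skill gives the automation more authority, high skill gives the novice more."""
    alpha = 1.0 - skill  # automation's share of control authority
    return alpha * u_auto + (1.0 - alpha) * u_novice

# Example: a novice with a large tracking error receives mostly the automation's input.
novice = np.array([[0.0, 0.0], [0.5, 0.2]])
expert = np.array([[0.0, 0.0], [1.0, 1.0]])
s = skill_level(novice, expert)
print(shared_control(u_novice=np.array([0.2]), u_auto=np.array([1.0]), skill=s))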
  3. With growing access to versatile robotics, it is beneficial for end users to be able to teach robots tasks without needing to code a control policy. One possibility is to teach the robot through successful task executions. However, near-optimal demonstrations of a task can be difficult to provide, and even successful demonstrations can fail to capture task aspects key to robust skill replication. Here, we propose a learning from demonstration (LfD) approach that enables learning of robust task definitions without the need for near-optimal demonstrations. We present a novel algorithmic framework for learning task specifications based on the ergodic metric, a measure of information content in motion. Moreover, we make use of negative demonstrations (demonstrations of what not to do) and show that they can help compensate for imperfect demonstrations, reduce the number of demonstrations needed, and highlight crucial task elements, improving robot performance. In a proof-of-concept example of cart-pole inversion, we show that negative demonstrations alone can be sufficient to successfully learn and recreate a skill. Through a human subject study with 24 participants, we show that consistently more information about a task can be captured from combined positive and negative (posneg) demonstrations than from the same number of positive demonstrations alone. Finally, we demonstrate our learning approach on simulated tasks of target reaching and table cleaning with a 7-DoF Franka arm. Our results point toward a future with robust, data-efficient LfD for novice users.
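Item 3 builds its task specifications on the ergodic metric. For reference, the standard spectral form of that metric from the ergodic-control literature is written below in LaTeX; the paper's exact formulation may differ from this common definition.

\[
  \mathcal{E}\bigl(x(\cdot)\bigr) = \sum_{k} \Lambda_k \left| c_k - \phi_k \right|^2,
  \qquad
  c_k = \frac{1}{T}\int_0^T F_k\bigl(x(t)\bigr)\,dt,
  \qquad
  \Lambda_k = \bigl(1 + \lVert k \rVert^2\bigr)^{-\frac{n+1}{2}},
\]

where the $\phi_k$ are the Fourier coefficients of the target spatial distribution, the $F_k$ are the corresponding basis functions, the $c_k$ are the coefficients of the trajectory's time-averaged statistics, and $n$ is the workspace dimension. A small $\mathcal{E}$ means the time the trajectory spends in each region matches the importance the specification assigns to that region.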
  4. Robot-mediated therapy is an emerging field of research that seeks to improve therapy for children with Autism Spectrum Disorder (ASD). Current approaches to autonomous robot-mediated therapy often focus on having a robot teach a single skill to children with ASD and lack a personalized approach to each individual. More recently, Learning from Demonstration (LfD) approaches have been explored as a way to teach socially assistive robots to deliver personalized interventions after deployment, but these approaches require large numbers of demonstrations and use learning models that cannot be easily interpreted. In this work, we present an LfD system capable of learning to deliver autism therapies in a data-efficient manner using learning models that are inherently interpretable. The LfD system learns a behavioral model of the task with minimal supervision via hierarchical clustering and then learns an interpretable policy to determine when to execute the learned behaviors. The system can learn from less than an hour of demonstrations, and for each of its predictions it can identify the demonstrated instances that contributed to its decision. The system performs well under unsupervised conditions and achieves even better performance with a low-effort human correction process enabled by the interpretable model.
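Item 4 describes a two-stage pipeline: cluster demonstrated behaviors hierarchically, then learn an interpretable policy over when to execute them. The sketch below is a toy version of that pipeline using off-the-shelf tools; the features, cluster count, and decision-tree policy are assumptions for illustration, not the paper's actual model.

# Illustrative two-stage sketch for item 4: hierarchical clustering of
# demonstrated behaviors, then an interpretable policy over when to use them.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy demonstration features: [child_engagement, time_since_last_prompt]
demos = np.array([[0.9, 2.0], [0.8, 3.0], [0.2, 10.0], [0.1, 12.0], [0.5, 6.0], [0.4, 7.0]])

# Stage 1: hierarchical clustering groups the demonstrations into behavior types.
Z = linkage(demos, method="ward")
behaviors = fcluster(Z, t=3, criterion="maxclust")  # e.g., praise / prompt / redirect

# Stage 2: a shallow decision tree serves as an interpretable "when to do what" policy.
policy = DecisionTreeClassifier(max_depth=2).fit(demos, behaviors)
print(export_text(policy, feature_names=["engagement", "time_since_prompt"]))
print(policy.predict([[0.3, 9.0]]))  # which learned behavior to execute now

Because both stages are transparent, each prediction can be traced back to the demonstrated instances in its cluster, which is the interpretability property the abstract emphasizes.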
  5. In this paper, we propose a human-automation interaction scheme to improve the task performance of novice human users with different skill levels. The proposed scheme includes two interaction modes: a learn-from-experts mode and an assist-novices mode. In the learn-from-experts mode, the automation learns from an expert human user so that it acquires an awareness of the task objective. Based on the learned task objective, in the assist-novices mode the automation customizes its control parameters to assist a novice human user toward emulating the performance of the expert. We experimentally test the proposed scheme in a quadrotor simulation environment, and the results show that the proposed approach is capable of adapting to and assisting novice human users so that their performance emulates that of the expert human user.
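Item 5 separates an offline learn-from-experts mode from an online assist-novices mode. The sketch below reduces that structure to its simplest form: the task objective is distilled into a reference trajectory fit from expert runs, and the assist law is a proportional correction toward it. Both reductions are assumptions made for illustration, not the paper's formulation.

# Illustrative two-mode sketch for item 5: offline learning from experts,
# then online assistance of a novice. Simplified assumptions throughout.
from typing import List
import numpy as np

def learn_from_experts(expert_runs: List[np.ndarray]) -> np.ndarray:
    """Offline mode: distill the expert demonstrations into a reference trajectory."""
    return np.mean(np.stack(expert_runs), axis=0)

def assist_novice(novice_input: np.ndarray, novice_state: np.ndarray,
                  reference: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """Online mode: nudge the novice's input toward the learned reference."""
    correction = gain * (reference - novice_state)
    return novice_input + correction

# Example with 1-D states over three time steps.
reference = learn_from_experts([np.array([0.0, 1.0, 2.0]), np.array([0.0, 1.2, 1.8])])
print(assist_novice(novice_input=np.array([0.1, 0.1, 0.1]),
                    novice_state=np.array([0.0, 0.5, 1.0]),
                    reference=reference))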