

Title: Physical interaction as communication: Learning robot objectives online from human corrections

When a robot performs a task next to a human, physical interaction is inevitable: the human might push, pull, twist, or guide the robot. The state of the art treats these interactions as disturbances that the robot should reject or avoid. At best, these robots respond safely while the human interacts; but after the human lets go, they simply return to their original behavior. We recognize that physical human–robot interaction (pHRI) is often intentional: the human intervenes on purpose because the robot is not doing the task correctly. In this article, we argue that when pHRI is intentional it is also informative: the robot can leverage interactions to learn how it should complete the rest of its current task even after the person lets go. We formalize pHRI as a dynamical system in which the human has in mind an objective function they want the robot to optimize, but the robot does not get direct access to the parameters of this objective: they are internal to the human. Within our proposed framework, human interactions become observations about the true objective. We introduce approximations to learn from and respond to pHRI in real time. We recognize that not all human corrections are perfect: users often interact with the robot noisily, so we improve the efficiency of robot learning from pHRI by reducing unintended learning. Finally, we conduct simulations and user studies on a robotic manipulator to compare our proposed approach with the state of the art. Our results indicate that learning from pHRI leads to better task performance and improved human satisfaction.
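To make the idea above concrete, the sketch below is a minimal, hedged illustration (not the authors' implementation) of treating a physical correction as an observation of the human's internal objective. It assumes a cost that is linear in hand-crafted trajectory features and nudges the robot's weight estimate along the feature difference between the human-corrected trajectory and the robot's planned one; the feature map and update rule here are hypothetical stand-ins.

```python
import numpy as np

def trajectory_features(xi):
    """Hypothetical feature map Phi(xi) for a trajectory xi of shape (T, 3):
    average end-effector height and total path length (an effort proxy)."""
    xi = np.asarray(xi, dtype=float)
    avg_height = xi[:, 2].mean()
    path_length = np.linalg.norm(np.diff(xi, axis=0), axis=1).sum()
    return np.array([avg_height, path_length])

def update_from_correction(theta, xi_planned, xi_corrected, alpha=0.1):
    """One online update of the cost weights theta (cost = theta . Phi(xi)).

    The corrected trajectory is treated as evidence about the true objective:
    features the human's push increased get a lower cost weight, so the
    robot's replanned motion favors them for the rest of the task."""
    delta_phi = trajectory_features(xi_corrected) - trajectory_features(xi_planned)
    return theta - alpha * delta_phi

# Toy usage: the human pushes the arm 0.2 m higher along the planned path.
theta = np.array([1.0, 1.0])                              # initial cost weights
xi_planned = np.linspace([0.0, 0.0, 0.5], [1.0, 0.0, 0.5], 50)
xi_corrected = xi_planned + np.array([0.0, 0.0, 0.2])
theta = update_from_correction(theta, xi_planned, xi_corrected)
```

The key design choice this illustrates is that a single physical interaction updates the objective estimate online, so the robot's behavior after the person lets go already reflects the correction rather than reverting to the original plan.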

 
NSF-PAR ID: 10305590
Author(s) / Creator(s): ; ; ;
Publisher / Repository: SAGE Publications
Date Published:
Journal Name: The International Journal of Robotics Research
Volume: 41
Issue: 1
ISSN: 0278-3649
Page Range / eLocation ID: p. 20-44
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Background

    In Physical Human–Robot Interaction (pHRI), the need to learn the robot’s motor-control dynamics is associated with increased cognitive load. Eye-tracking metrics can help understand the dynamics of fluctuating mental workload over the course of learning.

    Objective

    The aim of this study was to test eye-tracking measures’ sensitivity and reliability to variations in task difficulty, as well as their performance-prediction capability, in physical human–robot collaboration tasks involving an industrial robot for object comanipulation.

    Methods

    Participants (9M, 9F) learned to co-perform a virtual pick-and-place task with a bimanual robot over multiple trials. The robot's joint stiffness was manipulated to increase motor-coordination demands. The psychometric properties of eye-tracking measures and their ability to predict performance were investigated.

    Results

    Stationary Gaze Entropy and pupil diameter were the most reliable and sensitive measures of workload associated with changes in task difficulty and learning. Increased task difficulty was more likely to result in a robot-monitoring strategy. Eye-tracking measures were able to predict the occurrence of success or failure in each trial with 70% sensitivity and 71% accuracy.

    Conclusion

    The sensitivity and reliability of eye-tracking measures were acceptable, although values were lower than those observed in cognitive domains. Measures of gaze behavior indicative of visual monitoring strategies were most sensitive to task-difficulty manipulations and should be explored further for the pHRI domain, where motor control and internal-model formation are likely to be strong contributors to workload.

    Application

    Future collaborative robots could adapt to a human's cognitive state and skill level, as measured using eye-tracking metrics of workload and visual attention.

     
  2. Robots increasingly interact with humans through touch, where people are touching or being touched by robots. Yet, little is known about how such interactions shape a user’s experience. To inform future work in this area, we conduct a systematic review of 44 studies on physical human-robot interaction (pHRI). Our review examines the parameters of the touch (e.g., the role of touch, location), the experimental variations used by researchers, and the methods used to assess user experience. We identify five facets of user experience metrics from the questionnaire items and data recordings for pHRI studies. We highlight gaps and methodological issues in studying pHRI and compare user evaluation trends with the Human-Computer Interaction (HCI) literature. Based on the review, we propose a conceptual model of the pHRI experience. The model highlights the components of such touch experiences to guide the design and evaluation of physical interactions with robots and inform future user experience questionnaire development. 
  3. Ferretti, Gianni (Ed.)
    Many anticipated physical human-robot interaction (pHRI) applications in the near future are overground tasks such as walking assistance. To investigate the biomechanics of human movement during pHRI, this work presents Ophrie, a novel interactive robot dedicated to physical interaction tasks with a human in overground settings. Unique pHRI design requirements, such as low output impedance and the ability to apply small interaction forces, were considered in implementing the one-arm mobile robot. The robot can measure human arm stiffness, an important physical quantity that can reveal human biomechanics during overground pHRI, while the human walks alongside the robot. This robot is anticipated to enable novel pHRI experiments and advance our understanding of intuitive and effective overground pHRI. 
  4. Early research on physical human–robot interaction (pHRI) has necessarily focused on device design—the creation of compliant and sensorized hardware, such as exoskeletons, prostheses, and robot arms, that enables people to safely come in contact with robotic systems and to communicate about their collaborative intent. As hardware capabilities have become sufficient for many applications, and as computing has become more powerful, algorithms that support fluent and expressive use of pHRI systems have begun to play a prominent role in determining the systems’ usefulness. In this review, we describe a selection of representative algorithmic approaches that regulate and interpret pHRI, describing the progression from algorithms based on physical analogies, such as admittance control, to computational methods based on higher-level reasoning, which take advantage of multimodal communication channels. Existing algorithmic approaches largely enable task-specific pHRI, but they do not generalize to versatile human–robot collaboration. Throughout the review and in our discussion of next steps, we therefore argue that emergent embodied dialogue—bidirectional, multimodal communication that can be learned through continuous interaction—is one of the next frontiers of pHRI. 
    This work challenges the common assumption in physical human-robot interaction (pHRI) that the movement intention of a human user can be simply modeled with dynamic equations relating forces to movements, regardless of the user. Studies in physical human-human interaction (pHHI) suggest that interaction forces carry sophisticated information that reveals motor skills and roles in the partnership and even promotes adaptation and motor learning. In this view, the simple force-displacement equations often used in pHRI studies may not be sufficient. To test this, the interaction forces (F) between two humans were measured and analyzed as the leader guided the blindfolded follower along a randomly chosen path. The follower's actual trajectory was transformed into the velocity commands (V) that would allow a hypothetical robot follower to track the same trajectory. Possible analytical relationships between F and V were then obtained using neural network training. Results suggest that while F helps predict V, the relationship is not straightforward: seemingly irrelevant components of F may be important, force-velocity relationships are unique to each human follower, and human neural control of movement may affect the prediction of movement intent. It is suggested that user-specific, stereotype-free controllers may more accurately decode human intent in pHRI (a minimal illustrative sketch of this force-to-velocity fitting follows below). 
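The force-to-velocity analysis described in item 5 can be pictured with a small, assumed setup (not the study's data or code): fit both a plain linear map and a small neural network from interaction force F to velocity command V, then compare how well each explains held-out samples. The data below are synthetic placeholders, and the model choices (scikit-learn's LinearRegression and MLPRegressor) are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder data: 3-axis interaction forces and 2-D velocity commands with a
# mildly nonlinear, noisy relationship standing in for recorded pHHI trials.
F = rng.normal(size=(2000, 3))
V = np.tanh(F @ rng.normal(size=(3, 2))) + 0.05 * rng.normal(size=(2000, 2))

F_train, F_test, V_train, V_test = train_test_split(F, V, test_size=0.25, random_state=0)

# "Simple equation" baseline: a single linear force-to-velocity map.
linear = LinearRegression().fit(F_train, V_train)

# Learned, possibly nonlinear and follower-specific map.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                   random_state=0).fit(F_train, V_train)

print("linear R^2:", linear.score(F_test, V_test))
print("MLP    R^2:", mlp.score(F_test, V_test))
```

If the learned map consistently outperforms the linear baseline on held-out data, that is one way to see the paper's point that a fixed force-displacement equation underfits the information carried in interaction forces.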