

Title: Task-Oriented Robot-to-Human Handovers in Collaborative Tool-Use Tasks
Robot-to-human handovers are a common exercise across many robotics application domains, and their requirements may vary from domain to domain. In this paper, we first devised a taxonomy to organize these diverse and sometimes contradictory requirements. Among them, task-oriented handovers are not well studied but are important: in human-robot collaboration (HRC), the purpose of a handover is not merely to pass an object from a robot to a human receiver, but to enable the receiver to use it in a subsequent tool-use task. A successful task-oriented handover therefore incorporates task-related information, orienting the tool so that the human can grasp it in a way that suits the task (the geometric core of this requirement is sketched below). We identified multiple difficulty levels of task-oriented handovers and implemented a system that generates handovers with novel tools on a physical robot. Unlike previous studies of task-oriented handovers, we trained the robot on tool-use demonstrations rather than handover demonstrations, since task-oriented handovers depend on how the tool is used in the subsequent task. We demonstrated that our method adapts to all difficulty levels, including tasks that matched the typical usage of the tool, tasks that required an improvised or unusual usage of the tool, and tasks where the handover had to be adapted to the pose of a manipulandum.
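A minimal geometric sketch of what a task-oriented handover must achieve (our illustration, not the paper's learned, demonstration-trained pipeline; all names and numbers are assumptions): given a task-suitable grasp expressed in the tool frame and a desired hand pose near the task site, solve for the pose at which the robot should present the tool.

```python
# Minimal sketch (our illustration, not the paper's method).
import numpy as np

def presentation_pose(T_world_hand: np.ndarray, T_tool_hand: np.ndarray) -> np.ndarray:
    """Pose (world frame) at which the robot should hold the tool so that
    the task-suitable grasp lands at the desired hand pose.

    T_world_hand: desired human hand pose near the task site (4x4).
    T_tool_hand:  task-suitable hand pose expressed in the tool frame (4x4).
    """
    # T_world_hand = T_world_tool @ T_tool_hand  =>  solve for T_world_tool.
    return T_world_hand @ np.linalg.inv(T_tool_hand)

# Toy example: hand target half a meter in front of the receiver at chest
# height; the grasp sits 10 cm along the tool handle.
T_world_hand = np.eye(4); T_world_hand[:3, 3] = [0.5, 0.0, 1.0]
T_tool_hand = np.eye(4);  T_tool_hand[:3, 3] = [0.10, 0.0, 0.0]
print(presentation_pose(T_world_hand, T_tool_hand))
```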
Award ID(s):
1955653, 1928448, 2106690, 1813651
NSF-PAR ID:
10354176
Author(s) / Creator(s):
Date Published:
2022
Journal Name:
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The effectiveness of human-robot interactions critically depends on the success of computational efforts to emulate human inference of intent, anticipation of action, and coordination of movement. To this end, we developed two models that leverage a well-described feature of human movement, Gaussian-shaped submovements in velocity profiles, to act as robotic surrogates for human inference and trajectory planning in a handover task. We evaluated both models on how early in a handover movement the inference model can obtain accurate estimates of handover location and timing, and on how similar the model's trajectories are to human receiver trajectories. Initial results from one participant dyad demonstrate that our inference model can accurately predict handover location and timing, while the trajectory planner can use these predictions to provide a human-like trajectory plan for the robot. This approach delivers promising performance while remaining grounded in physiologically meaningful Gaussian-shaped velocity profiles of human motion (see the sketch below).
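A toy illustration of submovement-based early inference (our sketch, not the authors' models; function names and constants are assumptions): fit a Gaussian to the partially observed speed profile, then read handover timing and reach distance off the fitted parameters.

```python
# Fit a Gaussian-shaped submovement to a partial speed profile (toy sketch).
import numpy as np
from scipy.optimize import curve_fit

def gaussian_speed(t, amp, t_peak, width):
    # Gaussian-shaped velocity profile of a single submovement.
    return amp * np.exp(-0.5 * ((t - t_peak) / width) ** 2)

# Synthetic "observed" first 0.45 s of a reach whose true peak is at 0.6 s.
rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 0.45, 46)
v_obs = gaussian_speed(t_obs, 1.2, 0.6, 0.15) + 0.01 * rng.standard_normal(t_obs.size)

(amp, t_peak, width), _ = curve_fit(gaussian_speed, t_obs, v_obs, p0=[1.0, 0.5, 0.2])

t_end = t_peak + 3 * width                   # movement is essentially over by then
distance = amp * width * np.sqrt(2 * np.pi)  # closed-form integral of the Gaussian
print(f"predicted handover time ~{t_end:.2f} s, reach ~{distance:.2f} m")
```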
  2. As technology advances, the need for safe, efficient, and collaborative human-robot teams has become increasingly important. One of the most fundamental collaborative tasks in any setting is the object handover. Human-to-robot handovers can take one of two approaches: (1) direct hand-to-hand or (2) indirect hand-to-placement-to-pick-up. The latter minimizes contact between the human and the robot but can also increase idle time, since the robot must wait for the object to first be placed on a surface. To minimize such idle time, the robot must preemptively predict where the human intends to place the object. Furthermore, for the robot to act preemptively in any productive manner, prediction and motion planning must occur in real time. We introduce a novel prediction-planning pipeline that lets the robot move preemptively toward the human agent's intended placement location, using gaze and gestures as model inputs (a sketch of the prediction step follows below). In this paper, we investigate the performance and drawbacks of our early intent predictor-planner, as well as the practical benefits of such a pipeline, through a human-robot case study.
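A hedged sketch of what such an early intent predictor could look like (our illustration; the paper's pipeline also uses gestures and couples prediction with real-time planning; candidate locations, sigma, and the gaze stream are made up): maintain a belief over a few candidate placement locations and update it with each gaze sample according to how close the gaze ray passes to each candidate.

```python
# Bayesian belief over candidate placement spots, updated from gaze rays.
import numpy as np

candidates = np.array([[0.4, 0.2, 0.8],    # three candidate placement spots
                       [0.4, -0.2, 0.8],
                       [0.7, 0.0, 0.8]])
belief = np.full(len(candidates), 1.0 / len(candidates))

def gaze_likelihood(origin, direction, points, sigma=0.05):
    # Likelihood of each candidate given one gaze ray: Gaussian in the
    # perpendicular distance from the candidate to the ray.
    d = direction / np.linalg.norm(direction)
    rel = points - origin
    dist = np.linalg.norm(rel - np.outer(rel @ d, d), axis=1)
    return np.exp(-0.5 * (dist / sigma) ** 2)

# Two synthetic gaze samples aimed at the second candidate.
gaze_stream = [(np.array([0.0, 0.0, 1.5]), np.array([0.4, -0.2, -0.7]))] * 2
for origin, direction in gaze_stream:
    belief *= gaze_likelihood(origin, direction, candidates)
    belief /= belief.sum()

# The planner could already start moving toward the most likely spot.
print("predicted placement:", candidates[np.argmax(belief)])
```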
  3. Ubiquitous robot control and human-robot collaboration using smart devices pose a challenging problem, primarily due to strict accuracy requirements and sparse information. This paper presents a novel approach that incorporates a probabilistic differentiable filter, specifically the Differentiable Ensemble Kalman Filter (DEnKF), to control a robot solely from the Inertial Measurement Units (IMUs) of a smartwatch and a smartphone. The implemented system is cost-effective and accurately estimates the human pose state; the ensemble update it builds on is sketched below. Experimental results from human-robot handover tasks underscore that smart devices allow versatile and ubiquitous robot control.
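The differentiable part of DEnKF (learned process and observation models) is beyond a short sketch, but the ensemble Kalman update it is built around is standard; below is a minimal plain-EnKF step on a toy two-dimensional state with a scalar IMU-like measurement. Dimensions, noise levels, and measurements are our assumptions.

```python
# Plain ensemble Kalman filter step (toy sketch of the non-differentiable core).
import numpy as np

rng = np.random.default_rng(0)
N, dim_x = 64, 2                        # ensemble size; toy state (angle, rate)
ensemble = rng.normal(0.0, 1.0, (N, dim_x))

def enkf_step(ens, z, H, R, process_std=0.05):
    # 1) Forecast: propagate each member (identity dynamics + process noise).
    ens = ens + rng.normal(0.0, process_std, ens.shape)
    # 2) Analysis: Kalman gain from ensemble statistics.
    X = ens - ens.mean(axis=0)
    Z = ens @ H.T
    Y = Z - Z.mean(axis=0)
    P_xz = X.T @ Y / (N - 1)
    P_zz = Y.T @ Y / (N - 1) + R
    K = P_xz @ np.linalg.inv(P_zz)
    # 3) Update each member against a perturbed observation.
    z_pert = z + rng.multivariate_normal(np.zeros(len(z)), R, size=N)
    return ens + (z_pert - Z) @ K.T

H = np.array([[1.0, 0.0]])              # observe only the first state (IMU-like)
R = np.array([[0.01]])
for z in (np.array([0.30]), np.array([0.32]), np.array([0.29])):
    ensemble = enkf_step(ensemble, z, H, R)
print("state estimate:", ensemble.mean(axis=0))
```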
  4. This paper presents the design of a wearable robotic forearm for close-range human-robot collaboration. The robot serves as a lightweight supernumerary third arm for shared-workspace activities. We present a functional prototype resulting from an iterative design process that included several user studies. An analysis of the robot's kinematics shows a 246% increase in reachable workspace compared to natural human reach (one way such a figure can be computed is sketched below). The robot's degrees of freedom and range of motion support a variety of usage scenarios with the robot as a collaborative tool, including self-handovers, fetching objects while the human's hands are occupied, assisting human-human collaboration, and stabilizing an object. We analyze the biomechanical loads for these scenarios and find that the design can operate within human ergonomic wear limits. We then report on a pilot human-robot interaction study indicating that robot autonomy is more task-time efficient and preferred by users compared to direct voice control. These results suggest that the design presented here is a promising configuration for a lightweight wearable robotic augmentation device and can serve as a basis for further research into human-wearable collaboration.
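A sketch of how a reachable-workspace comparison like the 246% figure can be obtained (the link lengths, joint limits, and kinematic structure below are our stand-ins, not the paper's model): sample joint angles, run forward kinematics, and compare convex-hull volumes of the resulting point clouds.

```python
# Monte Carlo workspace-volume comparison with made-up kinematics.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def sample_workspace(links, lims, n=5000):
    # links: list of (rotation axis, link length); lims: per-joint limits (rad).
    pts = np.empty((n, 3))
    for i in range(n):
        q = rng.uniform(lims[:, 0], lims[:, 1])
        R, p = np.eye(3), np.zeros(3)
        for (axis, length), a in zip(links, q):
            R = R @ (rot_z(a) if axis == "z" else rot_y(a))
            p = p + R @ np.array([length, 0.0, 0.0])
        pts[i] = p
    return pts

arm = [("z", 0.30), ("y", 0.27), ("y", 0.08)]        # crude shoulder-elbow-wrist
lims = np.radians([[-60, 120], [-10, 140], [-45, 45]])
aug = arm + [("y", 0.25)]                            # wearable link past the wrist
lims_aug = np.vstack([lims, np.radians([[-90, 90]])])

gain = ConvexHull(sample_workspace(aug, lims_aug)).volume \
       / ConvexHull(sample_workspace(arm, lims)).volume - 1
print(f"workspace gain: {100 * gain:.0f} %")
```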
  5. With growing access to versatile robotics, it is beneficial for end users to be able to teach robots tasks without needing to code a control policy. One possibility is to teach the robot through successful task executions. However, near-optimal demonstrations of a task can be difficult to provide, and even successful demonstrations can fail to capture the aspects of a task that are key to robust skill replication. Here, we propose a learning from demonstration (LfD) approach that enables learning robust task definitions without the need for near-optimal demonstrations. We present a novel algorithmic framework for learning task specifications based on the ergodic metric, a measure of the information content of motion (sketched below). Moreover, we make use of negative demonstrations, i.e., demonstrations of what not to do, and show that they can help compensate for imperfect demonstrations, reduce the number of demonstrations needed, and highlight crucial task elements, improving robot performance. In a proof-of-concept example of cart-pole inversion, we show that negative demonstrations alone can be sufficient to successfully learn and recreate a skill. Through a human-subject study with 24 participants, we show that consistently more information about a task can be captured from combined positive and negative (posneg) demonstrations than from the same number of positive demonstrations alone. Finally, we demonstrate our learning approach on simulated target-reaching and table-cleaning tasks with a 7-DoF Franka arm. Our results point toward a future of robust, data-efficient LfD for novice users.
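The ergodic metric has a standard spectral definition used in ergodic control: compare Fourier coefficients of the trajectory's time average with those of a target spatial distribution, under Sobolev weights that emphasize low-frequency modes. Below is a minimal implementation of that definition on the unit square (our own sketch, not the authors' framework; the target density and trajectory are toy data).

```python
# Spectral ergodic metric on [0, 1]^2 (minimal sketch).
import numpy as np

K = 5  # Fourier modes per dimension

def basis(x, k1, k2):
    # Separable cosine basis on [0, 1]^2 (normalization omitted for brevity).
    return np.cos(np.pi * k1 * x[..., 0]) * np.cos(np.pi * k2 * x[..., 1])

def ergodic_metric(traj, grid, phi):
    """traj: (T, 2) points; grid: (G, 2) domain samples; phi: (G,) target
    density at the grid points, summing to 1."""
    eps = 0.0
    for k1 in range(K):
        for k2 in range(K):
            c_k = basis(traj, k1, k2).mean()            # time average
            phi_k = (basis(grid, k1, k2) * phi).sum()   # spatial average
            lam = (1.0 + k1 ** 2 + k2 ** 2) ** -1.5     # Sobolev weight, d = 2
            eps += lam * (c_k - phi_k) ** 2
    return eps

# Toy check: a trajectory drawn from the target density scores near zero.
rng = np.random.default_rng(0)
g = np.stack(np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50)), -1).reshape(-1, 2)
phi = np.exp(-20.0 * ((g - 0.5) ** 2).sum(axis=1)); phi /= phi.sum()
traj = np.clip(0.5 + 0.16 * rng.standard_normal((2000, 2)), 0.0, 1.0)
print("ergodic metric:", ergodic_metric(traj, g, phi))
```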