
Title: A Spatio-Temporal Prediction and Planning Framework for Proactive Human–Robot Collaboration
Abstract

A significant challenge in human–robot collaboration (HRC) is coordinating robot and human motions. Discoordination can lead to production delays and human discomfort. Prior works seek coordination by planning robot paths that treat humans or their anticipated occupancy as static obstacles, making them nearsighted and prone to entrapment by human motion. This work presents the spatio-temporal avoidance of predictions-prediction and planning framework (STAP-PPF) to improve robot–human coordination in HRC. STAP-PPF predicts multi-step human motion sequences based on the locations of objects the human manipulates. It then proactively determines time-optimal robot paths that account for the predicted human motion and for the robot speed restrictions anticipated under the ISO/TS 15066 speed and separation monitoring (SSM) mode. During execution, STAP-PPF continuously updates its human motion predictions and, in real time, warps the robot's path to reflect the updated predictions and SSM effects, mitigating delays and human discomfort. Results show that STAP-PPF generates robot trajectories of shorter duration, adapts better to real-time deviations in human motion, and maintains greater robot–human separation throughout tasks requiring close human–robot interaction. Tests with an assembly sequence demonstrate STAP-PPF's ability to predict multi-step human tasks and plan robot motions for the sequence. STAP-PPF also estimates robot trajectory durations most accurately, within 30% of the actual values, which can be used to adapt the robot's task sequencing to minimize disruption.
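As a rough illustration of how the SSM mode constrains planning, the sketch below computes a robot speed cap from the predicted robot-human separation using a simplified form of the ISO/TS 15066 protective separation relation with constant robot deceleration. The parameter names and values (reaction_time, stop_time, decel, clearance) are illustrative assumptions, not quantities taken from the paper.

```python
import math

def ssm_speed_limit(separation, human_speed=1.6, reaction_time=0.1,
                    stop_time=0.3, decel=2.0, clearance=0.1):
    """Largest robot speed v (m/s) satisfying a simplified SSM relation:
        separation >= human_speed*(reaction_time + stop_time)
                      + v*reaction_time + v**2/(2*decel) + clearance
    Returns 0 when the robot must stop to preserve the protective distance."""
    budget = separation - clearance - human_speed * (reaction_time + stop_time)
    if budget <= 0.0:
        return 0.0
    a, b = 1.0 / (2.0 * decel), reaction_time      # a*v**2 + b*v - budget = 0
    return (-b + math.sqrt(b * b + 4.0 * a * budget)) / (2.0 * a)

# Speed cap shrinks as the predicted human position gets closer to the robot
for d in (1.5, 1.0, 0.7, 0.5):
    print(f"{d:.1f} m -> {ssm_speed_limit(d):.2f} m/s")
```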

 
Award ID(s):
1830383
NSF-PAR ID:
10483550
Author(s) / Creator(s):
;
Editor(s):
Pai Zheng 
Publisher / Repository:
ASME, The American Society of Mechanical Engineers
Date Published:
Journal Name:
Journal of Manufacturing Science and Engineering
Volume:
145
Issue:
12
ISSN:
1087-1357
Subject(s) / Keyword(s):
["human\u2013robot collaboration, human\u2013robot interaction, robot motion planning, control and automation, production systems optimization, robotics and flexible tooling"]
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper addresses the human-robot collaboration (HRC) challenge of integrating predictions of human activity to give the robot a proactive-n-reactive response capability. Prior works that consider current or predicted human poses as static obstacles are too nearsighted or too conservative in planning, potentially causing delayed robot paths. Alternatively, time-varying prediction of human poses enables robot paths that avoid anticipated human poses, synchronized dynamically in time and space. Herein, a proactive path planning method, denoted STAP, is presented that uses spatio-temporal human occupancy maps to find robot trajectories that anticipate human movements, allowing robot passage without stopping. In addition, STAP anticipates delays from the robot speed restrictions required by ISO/TS 15066 speed and separation monitoring (SSM). STAP also includes a sampling-based planning algorithm, based on RRT*, that solves the spatio-temporal motion planning problem and finds paths of minimum expected duration. Experimental results show STAP generates paths of shorter duration and greater average robot-human separation distance throughout tasks. Additionally, STAP more accurately estimates robot trajectory durations in HRC, which are useful in arriving at proactive-n-reactive robot sequencing.
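As a minimal sketch of the kind of cost such a planner optimizes, the function below scores a candidate path by its expected duration when the robot's speed on each segment is capped by the distance to the predicted human position at the time that segment is reached. The toy speed-cap model, the waypoint format, and the human_at helper are assumptions for illustration; the paper embeds a cost of this kind inside an RRT*-style spatio-temporal search.

```python
import numpy as np

def expected_path_duration(waypoints, human_at, nominal_speed=1.0):
    """Expected traversal time of a waypoint path when speed is limited by the
    separation from the *predicted* human position at the current time.
    `human_at(t)` returns the predicted human position at time t (assumption)."""
    t = 0.0
    for p, q in zip(waypoints[:-1], waypoints[1:]):
        d = np.linalg.norm(p - human_at(t))                  # predicted separation
        cap = np.clip(0.8 * (d - 0.2), 0.05, nominal_speed)  # toy SSM-style cap;
        t += np.linalg.norm(q - p) / cap                     # floor avoids a full stop
    return t

# Straight-line path crossing near a human predicted to walk through the cell
path = np.linspace([0.0, 0.0, 0.8], [1.2, 0.0, 0.8], 13)
human = lambda t: np.array([0.6, 0.8 - 0.4 * t, 0.0])
print(f"expected duration: {expected_path_duration(path, human):.2f} s")
```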
  2. In human-robot collaboration (HRC), robots and humans must work together in shared, overlapping workspaces to accomplish tasks. If human and robot motion can be coordinated, then collisions between robot and human can be avoided seamlessly without requiring either to stop work. A key part of this coordination is anticipating humans' future motion so robot motion can be adapted proactively. In this work, a generative neural network predicts a multi-step sequence of human poses for tabletop reaching motions. The multi-step sequence is mapped to a time series using a model of human speed versus motion distance. The input to the network is the human's reaching target relative to the current pelvis location, combined with the current human pose. A dataset of human motions was generated, reaching for various positions on or above the table in front of the human, starting from a wide variety of initial human poses. After training the network, experiments showed that the predicted sequences matched actual recordings of human motion within an average L2 joint error of 7.6 cm and an average L2 link roll-pitch-yaw error of 0.301 radians. This method predicts an entire reaching motion without suffering from the exponential propagation of prediction error that limits the horizon of prior works.
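A toy stand-in for this kind of predictor is sketched below: a small feed-forward network mapping the current pose and the reaching target (relative to the pelvis) to a fixed-length sequence of future poses. The joint count, horizon, and layer sizes are placeholder assumptions; the authors' generative architecture and the speed-versus-distance time mapping are not reproduced here.

```python
import torch
import torch.nn as nn

class ReachSequencePredictor(nn.Module):
    """Illustrative predictor: current pose + target offset -> K future poses.
    Dimensions and layers are assumptions, not the paper's architecture."""
    def __init__(self, n_joints=17, horizon=20, hidden=256):
        super().__init__()
        in_dim = n_joints * 3 + 3            # flattened pose + target offset
        out_dim = horizon * n_joints * 3     # K future poses, flattened
        self.horizon, self.n_joints = horizon, n_joints
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, pose, target_rel):
        x = torch.cat([pose.flatten(1), target_rel], dim=1)
        return self.net(x).view(-1, self.horizon, self.n_joints, 3)

# One forward pass on dummy data
model = ReachSequencePredictor()
seq = model(torch.randn(1, 17, 3), torch.randn(1, 3))
print(seq.shape)   # torch.Size([1, 20, 17, 3])
```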
  3. With the development of industrial automation and artificial intelligence, robotic systems are becoming an essential part of factory production, and human-robot collaboration (HRC) is an emerging trend in the industrial field. In our previous work, ten dynamic gestures were designed for communication between a human worker and a robot in manufacturing scenarios, and a dynamic gesture recognition model based on convolutional neural networks (CNNs) was developed. Building on that model, this study designs and develops a new real-time HRC system based on a multi-threading method and the CNN. The system enables real-time interaction between a human worker and a robotic arm through dynamic gestures. First, a multi-threading architecture is constructed for high-speed operation and fast response while scheduling more than one task at the same time. Next, a real-time dynamic gesture recognition algorithm is developed, in which a human worker's behavior and motion are continuously monitored and captured, and motion history images (MHIs) are generated in real time. The generation of the MHIs and their identification using the classification model are accomplished synchronously. If a designated dynamic gesture is detected, it is immediately transmitted to the robotic arm to trigger a real-time response. A graphical user interface (GUI) integrating the proposed HRC system is developed to visualize the real-time motion history and the classification results of the gesture identification. A series of actual collaboration experiments are carried out between a human worker and a six-degree-of-freedom (6-DOF) Comau industrial robot, and the experimental results show the feasibility and robustness of the proposed system.
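The sketch below shows one way such a pipeline can be organized: a capture thread pushes frames into a queue while a recognition thread maintains a motion history image (MHI) by frame differencing and passes it to the trained classifier. The threshold, frame rate, and function names are assumptions; the system's actual CNN, gesture set, and GUI are omitted.

```python
import threading, queue
import numpy as np
import cv2

frames = queue.Queue(maxsize=8)   # decouples camera I/O from recognition
MHI_WINDOW = 1.0                  # seconds of motion history retained

def capture_loop(src=0):
    """Capture thread: pushes grayscale frames without blocking recognition."""
    cap = cv2.VideoCapture(src)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frames.put(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()

def recognition_loop(classify, fps=30.0):
    """Recognition thread: updates the MHI by frame differencing and hands it
    to `classify` (the trained CNN, assumed available) for gesture detection."""
    prev, mhi, t = None, 0.0, 0.0
    while True:
        gray = frames.get().astype(np.float32)
        if prev is not None:
            t += 1.0 / fps
            moving = np.abs(gray - prev) > 25.0            # motion mask
            mhi = np.where(moving, t, mhi)                 # stamp new motion
            mhi = np.where(mhi < t - MHI_WINDOW, 0.0, mhi) # fade old motion
            classify(mhi)                                  # e.g. forward gesture to robot
        prev = gray

threading.Thread(target=capture_loop, daemon=True).start()
# recognition_loop(model.predict) would run on the main thread or a second worker
```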

     
  4. Complex service robotics scenarios entail unpredictable task appearance both in space and time. This requires robots to continuously relocate and imposes a trade-off between motion costs and efficiency in task execution. In such scenarios, multi-robot systems and even swarms of robots can be exploited to service different areas in parallel. An efficient deployment needs to continuously determine the best allocation according to the actual service needs, while also taking relocation costs into account when that allocation must be modified. For large-scale problems, centrally predicting optimal allocations and movement paths for each robot quickly becomes infeasible. Instead, decentralized solutions are needed that allow the robotic system to self-organize and adaptively respond to the task demands. In this paper, we propose a distributed and asynchronous approach to simultaneous task assignment and path planning for robot swarms, which combines a bio-inspired collective decision-making process for allocating robots to areas to be serviced with a search-based path planning approach for the actual routing of robots toward tasks to be executed. Task allocation exploits a hierarchical representation of the workspace, supporting the deployment of robots to the areas that most require service. We investigate four realistic environments of increasing complexity, where each task requires a robot to reach a location and work for a specific amount of time. The proposed approach improves over two different baseline algorithms in specific settings with statistical significance, while showing consistently good results overall. Moreover, the proposed solution is robust to limited communication and robot failures.
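One common bio-inspired allocation mechanism, shown below as a caricature and not necessarily the exact process used in the paper, is a response-threshold rule: each robot independently commits to an area with a probability that grows with that area's local task demand. The demand values and threshold are made up for illustration; path planning toward the chosen area is omitted.

```python
import random

def choose_area(demands, threshold=2.0):
    """Response-threshold rule: commit to area i with probability proportional
    to demand_i**2 / (demand_i**2 + threshold**2)."""
    weights = [d * d / (d * d + threshold * threshold) for d in demands]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

# Each robot decides asynchronously from its local view of task demand
print([choose_area([5, 1, 3]) for _ in range(10)])
```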
  5. This paper presents a comprehensive disassembly sequence planning (DSP) algorithm for the human–robot collaboration (HRC) setting that considers several important factors, including limited resources and human workers' safety. The proposed DSP algorithm plans and distributes disassembly tasks among the human operator, the robot, and HRC, aiming to minimize the total disassembly time without violating resource and safety constraints. Regarding the resource constraints, we consider one human operator, one robot, and a limited quantity of disassembly tools. Regarding the safety constraints, we consider avoiding potential human injuries from to-be-disassembled components and possible collisions between the human operator and the robot due to the short distance between disassembly tasks. In addition, the transitions for tool changing, movement between disassembly modules, and the precedence constraints of components to be disassembled are also formulated as constraints in the problem. Both numerical and experimental studies on the disassembly of a used hard disk drive (HDD) have been conducted to validate the proposed algorithm.
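To give a flavor of the scheduling problem, the toy greedy planner below assigns each task to the human, the robot, or a collaborative (HRC) mode so as to minimize its finish time while respecting precedence. Tool changes, limited tool quantities, and the safety-spacing constraints from the paper are deliberately omitted, and all task names and durations are invented.

```python
def greedy_dsp(tasks, times, precedence):
    """Toy greedy scheduler for tasks given in a precedence-feasible order.
    `times[t][agent]` is the duration of task t for 'human', 'robot', or 'hrc'."""
    free = {"human": 0.0, "robot": 0.0}    # earliest availability of each resource
    done, plan = {}, []
    for t in tasks:
        ready = max((done[p] for p in precedence.get(t, [])), default=0.0)
        def start_of(agent):
            avail = max(free.values()) if agent == "hrc" else free[agent]
            return max(avail, ready)
        agent = min(times[t], key=lambda a: start_of(a) + times[t][a])
        start = start_of(agent)
        done[t] = end = start + times[t][agent]
        if agent == "hrc":                 # collaborative task occupies both resources
            free["human"] = free["robot"] = end
        else:
            free[agent] = end
        plan.append((t, agent, start, end))
    return plan, max(done.values())

tasks = ["cover", "pcb", "platter"]
times = {"cover":   {"human": 10, "robot": 20, "hrc": 12},
         "pcb":     {"human": 15, "robot": 12, "hrc": 9},
         "platter": {"human": 8,  "robot": 14, "hrc": 10}}
print(greedy_dsp(tasks, times, {"pcb": ["cover"], "platter": ["pcb"]}))
```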