Title: Spatio-Temporal Avoidance of Predicted Occupancy in Human-Robot Collaboration
This paper addresses the human-robot collaboration (HRC) challenge of integrating predictions of human activity to provide a proactive-n-reactive response capability for the robot. Prior works that treat current or predicted human poses as static obstacles are either too nearsighted or too conservative in planning, potentially causing delayed robot paths. Alternatively, time-varying predictions of human poses enable robot paths that avoid anticipated human poses, synchronized dynamically in time and space. Herein, a proactive path planning method, denoted STAP, is presented that uses spatio-temporal human occupancy maps to find robot trajectories that anticipate human movements, allowing robot passage without stopping. In addition, STAP anticipates delays from the robot speed restrictions required by ISO/TS 15066 speed and separation monitoring (SSM). STAP also includes a sampling-based planning algorithm, based on RRT*, that solves the spatio-temporal motion planning problem and finds paths of minimum expected duration. Experimental results show that STAP generates paths of shorter duration and greater average robot-human separation distance throughout tasks. Additionally, STAP more accurately estimates robot trajectory durations in HRC, which is useful for proactive-n-reactive robot sequencing.
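The SSM-induced speed restriction and its effect on expected trajectory duration can be illustrated with a minimal sketch. The linear separation-based scaling and all parameter values below are illustrative assumptions, not the exact ISO/TS 15066 protective-distance formula or the authors' implementation:

```python
def ssm_speed_limit(separation_m, v_max=1.0, stop_dist=0.2, full_speed_dist=1.0):
    """Scale the robot's allowed speed by robot-human separation.

    Below stop_dist the robot must halt; above full_speed_dist it may
    move at v_max; in between, speed scales linearly with separation.
    Distances in meters, speeds in m/s (hypothetical values).
    """
    if separation_m <= stop_dist:
        return 0.0
    if separation_m >= full_speed_dist:
        return v_max
    frac = (separation_m - stop_dist) / (full_speed_dist - stop_dist)
    return v_max * frac


def expected_duration(segment_lengths, predicted_separations):
    """Expected traversal time of a path whose segments experience the
    given predicted human separations: restricted segments take longer."""
    total = 0.0
    for length, sep in zip(segment_lengths, predicted_separations):
        v = ssm_speed_limit(sep)
        total += length / v if v > 0 else float("inf")
    return total
```

A planner minimizing `expected_duration` rather than raw path length will prefer slightly longer paths that keep clear of predicted occupancy over short paths that trigger SSM slowdowns or stops.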
Award ID(s):
1830383
PAR ID:
10483557
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE RO-MAN
ISSN:
1944-9445
ISBN:
979-8-3503-3670-2
Page Range / eLocation ID:
2162 to 2168
Subject(s) / Keyword(s):
Sequential analysis; Collaboration; Production; Trajectory planning; Delays; Spatiotemporal phenomena
Format(s):
Medium: X
Location:
Busan, Korea, Republic of
Sponsoring Org:
National Science Foundation
More Like this
  1. Pai Zheng (Ed.)
    Abstract A significant challenge in human–robot collaboration (HRC) is coordinating robot and human motions. Discoordination can lead to production delays and human discomfort. Prior works seek coordination by planning robot paths that consider humans or their anticipated occupancy as static obstacles, making them nearsighted and prone to entrapment by human motion. This work presents the spatio-temporal avoidance of predictions-prediction and planning framework (STAP-PPF) to improve robot–human coordination in HRC. STAP-PPF predicts multi-step human motion sequences based on the locations of objects the human manipulates. STAP-PPF then proactively determines time-optimal robot paths considering predicted human motion and robot speed restrictions anticipated according to the ISO/TS 15066 speed and separation monitoring (SSM) mode. When executing robot paths, STAP-PPF continuously updates human motion predictions. In real-time, STAP-PPF warps the robot’s path to account for continuously updated human motion predictions and updated SSM effects to mitigate delays and human discomfort. Results show that STAP-PPF generates robot trajectories of shorter duration, adapts better to real-time human motion deviation, and maintains greater robot-human separation throughout tasks requiring close human–robot interaction. Tests with an assembly sequence demonstrate STAP-PPF’s ability to predict multi-step human tasks and plan robot motions for the sequence. STAP-PPF also most accurately estimates robot trajectory durations, within 30% of actual, which can be used to adapt the robot sequencing to minimize disruption.
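The core spatio-temporal idea above can be sketched in a few lines: instead of treating predicted human occupancy as one static obstacle, the planner indexes occupancy by time and only rejects robot states that coincide with the human in both space and time. The grid representation below is a hypothetical simplification, not the authors' implementation:

```python
def occupied(prediction, t, cell):
    """prediction maps a discrete time step to the set of grid cells
    the human is predicted to occupy at that step."""
    return cell in prediction.get(t, set())


def path_is_clear(timed_path, prediction):
    """timed_path: list of (t, cell) robot waypoints. The path is clear
    iff no waypoint coincides with predicted human occupancy at the
    same time step (spatio-temporal, not static, avoidance)."""
    return all(not occupied(prediction, t, cell) for t, cell in timed_path)
```

Under a static treatment (the union of occupancy over all time steps), any cell the human ever visits would block the robot; indexing by time lets the robot pass through a cell before or after the human does, which is what avoids the entrapment behavior described above.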
  2. Robots and humans closely working together within dynamic environments must be able to continuously look ahead and identify potential collisions within their ever-changing environment. To enable the robot to act upon such situational awareness, its controller requires an iterative collision detection capability that will allow for computationally efficient Proactive Adaptive Collaboration Intelligence (PACI) to ensure safe interactions. In this paper, an algorithm is developed to evaluate a robot’s trajectory and the dynamic environment in which it operates, and to predict collisions between the robot and dynamic obstacles in its environment. This algorithm takes as input the joint motion data of predefined robot execution plans and constructs a sweep of the robot’s instantaneous poses throughout time. The sweep models the trajectory as a point cloud containing all locations occupied by the robot and the time at which they will be occupied. To reduce the computational burden, Coons patches are leveraged to approximate the robot’s instantaneous poses. In parallel, the algorithm creates a similar sweep to model any human(s) and other obstacles being tracked in the operating environment. Overlaying temporal mapping of the sweeps reveals anticipated collisions that will occur if the robot and human do not proactively modify their motion. The algorithm is designed to feed into a segmentation and switching logic framework and provide real-time proactive-n-reactive behavior for different levels of human-robot interactions, while maintaining safety and production efficiency. To evaluate the predictive collision detection approach, multiple test cases are presented to quantify the computational speed and accuracy in predicting collisions.
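The sweep-overlay check described above can be sketched as a brute-force space-time proximity test. The sweep representation as timestamped points and all thresholds below are illustrative assumptions; the paper's implementation uses Coons patches rather than raw point clouds for efficiency:

```python
def find_anticipated_collisions(robot_sweep, human_sweep, eps=0.05, dt=0.1):
    """Each sweep is a list of (t, x, y, z) samples: every location a
    body will occupy and when. A collision is anticipated wherever the
    two sweeps come within eps meters at nearly the same time (within
    dt seconds). Brute-force pairwise check for clarity; a real
    implementation would use a spatial index or surface patches."""
    hits = []
    for tr, xr, yr, zr in robot_sweep:
        for th, xh, yh, zh in human_sweep:
            if abs(tr - th) <= dt:
                d2 = (xr - xh) ** 2 + (yr - yh) ** 2 + (zr - zh) ** 2
                if d2 <= eps ** 2:
                    hits.append((tr, (xr, yr, zr)))
    return hits
```

An empty result means the planned robot trajectory and the predicted human motion never overlap in space and time; any hit gives the controller a specific time and location at which to proactively modify the motion.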
  3. Human-Robot Collaboration (HRC), which envisions a workspace in which human and robot can dynamically collaborate, has been identified as a key element in smart manufacturing. Human action recognition plays a key role in the realization of HRC as it helps identify current human action and provides the basis for future action prediction and robot planning. Despite recent development of Deep Learning (DL) that has demonstrated great potential in advancing human action recognition, one of the key issues remains how to effectively leverage the temporal information of human motion to improve the performance of action recognition. Furthermore, large volumes of training data are often difficult to obtain due to manufacturing constraints, which poses a challenge for the optimization of DL models. This paper presents an integrated method based on optical flow and convolutional neural network (CNN)-based transfer learning to tackle these two issues. First, optical flow images, which encode the temporal information of human motion, are extracted and serve as the input to a two-stream CNN structure for simultaneous parsing of spatial-temporal information of human motion. Then, transfer learning is investigated to transfer the feature extraction capability of a pretrained CNN to manufacturing scenarios. Evaluation using an engine block assembly task confirmed the effectiveness of the developed method.
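The fusion step of a two-stream structure can be sketched independently of the networks themselves. The class-score vectors below are hypothetical placeholders, not outputs of the trained CNNs described above; late fusion by averaging softmax probabilities is one common choice, not necessarily the paper's exact scheme:

```python
import math


def softmax(scores):
    """Convert raw class scores to probabilities (max-shifted for
    numerical stability)."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def fuse_two_stream(spatial_scores, temporal_scores, w_temporal=0.5):
    """Late fusion of a spatial stream (RGB appearance) and a temporal
    stream (optical-flow motion): blend per-class probabilities and
    return the index of the winning action class."""
    p_s = softmax(spatial_scores)
    p_t = softmax(temporal_scores)
    fused = [(1 - w_temporal) * a + w_temporal * b for a, b in zip(p_s, p_t)]
    return max(range(len(fused)), key=fused.__getitem__)
```

With `w_temporal` above 0.5, a confident motion cue from the optical-flow stream can override an ambiguous appearance cue, which is the point of feeding temporal information into the recognizer.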
  4. In Human-Robot Collaboration (HRC), robots and humans must work together in shared, overlapping workspaces to accomplish tasks. If human and robot motion can be coordinated, then collisions between robot and human can be seamlessly avoided without requiring either of them to stop work. A key part of this coordination is anticipating humans’ future motion so robot motion can be adapted proactively. In this work, a generative neural network predicts a multi-step sequence of human poses for tabletop reaching motions. The multi-step sequence is mapped to a time series based on a human speed versus motion distance model. The input to the network is the human’s reaching target relative to current pelvis location combined with current human pose. A dataset was generated of human motions to reach various positions on or above the table in front of the human, starting from a wide variety of initial human poses. After training the network, experiments showed that the predicted sequences generated by this method matched the actual recordings of human motion within an L2 joint error of 7.6 cm and an L2 link roll-pitch-yaw error of 0.301 radians on average. This method predicts an entire reaching motion without suffering from the exponential propagation of prediction error that limits the horizon of prior works.
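The mapping from a predicted pose sequence to a time series can be sketched as follows. The speed-versus-remaining-distance model and its constants are illustrative assumptions for the sketch, not the fitted model from the work above:

```python
def timestamp_pose_sequence(step_distances, v_min=0.1, v_max=1.2):
    """Assign a timestamp to each pose in a predicted sequence.

    step_distances[i] is the hand-travel distance (meters) between
    consecutive predicted poses. Each step's speed comes from a simple
    hypothetical model: speed grows with remaining travel distance
    (fast mid-reach, slow near the target), clamped to [v_min, v_max]
    m/s. Returns one timestamp per pose, starting at t=0."""
    times = [0.0]
    remaining = sum(step_distances)
    for d in step_distances:
        speed = min(v_max, max(v_min, 1.5 * remaining))
        times.append(times[-1] + d / speed)
        remaining -= d
    return times
```

Attaching timestamps this way turns a bare pose sequence into the time-synchronized prediction that a spatio-temporal planner needs, so the robot knows not just where the human will be but when.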
  5.
    Abstract To enable safe and effective human–robot collaboration (HRC) in smart manufacturing, seamless integration of sensing, cognition, and prediction into the robot controller is critical for real-time awareness, response, and communication inside a heterogeneous environment (robots, humans, and equipment). The specific research objective is to provide the robot Proactive Adaptive Collaboration Intelligence (PACI) and switching logic within its control architecture in order to give the robot the ability to optimally and dynamically adapt its motions, given a priori knowledge and predefined execution plans for its assigned tasks. The challenge lies in augmenting the robot’s decision-making process to have greater situation awareness and to yield smart robot behaviors/reactions when subject to different levels of human–robot interaction, while maintaining safety and production efficiency. Robot reactive behaviors were achieved via cost function-based switching logic activating the best suited high-level controller. The PACI’s underlying segmentation and switching logic framework is demonstrated to yield a high degree of modularity and flexibility. The performance of the developed control structure subjected to different levels of human–robot interactions was validated in a simulated environment. Open-loop commands were sent to the physical e.DO robot to demonstrate how the proposed framework would behave in a real application. 
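The cost function-based switching logic described above can be illustrated with a minimal sketch. The controller names and cost terms below are hypothetical placeholders for whatever high-level controllers the framework registers:

```python
def select_controller(controllers, state):
    """Activate the best-suited high-level controller: each candidate
    supplies a cost function over the current interaction state (e.g.
    penalizing separation violations or production delay), and the
    controller with the lowest cost wins."""
    return min(controllers, key=lambda c: c["cost"](state))


# Hypothetical controller set: costs keyed on robot-human separation (m).
controllers = [
    {"name": "nominal",   "cost": lambda s: 0.0 if s["separation"] > 1.0 else 10.0},
    {"name": "slow_down", "cost": lambda s: 1.0 if s["separation"] > 0.3 else 5.0},
    {"name": "halt",      "cost": lambda s: 2.0},
]
```

Re-evaluating `select_controller` each control cycle yields the reactive behavior switching described above: as the human approaches, the winning controller shifts from nominal operation to slowing down to halting, without any single controller having to handle every interaction level.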