Title: Human Modeling for Efficient Predictive Collision Detection
As demands on manufacturing rapidly evolve, flexible manufacturing is becoming essential for achieving the productivity needed to remain competitive. An innovative approach to flexible manufacturing is the introduction of fenceless robotic manufacturing cells that enable greater human-robot collaboration (HRC): operations in which a human and a robot share a space, complete tasks together, and interact with each other. Such operations, however, pose serious safety concerns. Before HRC can become a viable option, robots must be capable of safely operating within, and responding to events in, dynamic environments, and they must be able to do so quickly during online operation. This paper outlines an algorithm for predictive collision detection. The algorithm gives the robot the ability to look ahead at its own trajectory and the trajectories of other bodies in its environment, and to predict potential collisions. It approximates a continuous swept volume of any articulated body along its trajectory by taking only a few time-sequential samples of the body’s predicted orientations and creating surfaces that patch those orientations together with Coons patches. Run-time data collected for this algorithm suggest that it can accurately predict future collisions in under 30 ms.
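To make the swept-volume approximation concrete, the sketch below builds a standard bilinearly blended Coons patch between two time-sequential samples of a single link. This is a minimal illustration, not the paper’s implementation: the boundary curves, the straight-line motion assumed for the link endpoints, and all numeric values are placeholders.

```python
import numpy as np

def coons_patch(c0, c1, d0, d1, s, t):
    """Bilinearly blended Coons patch.

    c0(s), c1(s): boundary curves along the link at the earlier/later sampled pose.
    d0(t), d1(t): curves joining corresponding link endpoints across the two
                  sampled poses (straight-line motion is assumed here).
    s, t in [0, 1]; returns a point on the interpolating surface.
    """
    p00, p10 = c0(0.0), c0(1.0)   # corners shared with d0(0), d1(0)
    p01, p11 = c1(0.0), c1(1.0)   # corners shared with d0(1), d1(1)
    ruled_s = (1 - t) * c0(s) + t * c1(s)
    ruled_t = (1 - s) * d0(t) + s * d1(t)
    bilinear = ((1 - s) * (1 - t) * p00 + s * (1 - t) * p10
                + (1 - s) * t * p01 + s * t * p11)
    return ruled_s + ruled_t - bilinear

# Illustrative use: a straight link sampled at two instants of a predicted
# trajectory; the patch approximates the surface swept in between.
link_t0 = lambda s: np.array([s, 0.0, 0.0])          # pose at sample k
link_t1 = lambda s: np.array([s, 0.2, 0.1 * s])      # pose at sample k+1
end_a = lambda t: (1 - t) * link_t0(0.0) + t * link_t1(0.0)
end_b = lambda t: (1 - t) * link_t0(1.0) + t * link_t1(1.0)

samples = np.array([coons_patch(link_t0, link_t1, end_a, end_b, s, t)
                    for s in np.linspace(0, 1, 10)
                    for t in np.linspace(0, 1, 10)])
print(samples.shape)   # (100, 3) points approximating the swept surface
```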
Award ID(s):
1830383
NSF-PAR ID:
10291770
Author(s) / Creator(s):
Date Published:
Journal Name:
UF Journal of Undergraduate Research
Volume:
22
ISSN:
2638-0668
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Robots and humans working closely together within dynamic environments must be able to continuously look ahead and identify potential collisions within their ever-changing environment. To enable the robot to act upon such situational awareness, its controller requires an iterative collision detection capability that allows for computationally efficient Proactive Adaptive Collaboration Intelligence (PACI) to ensure safe interactions. In this paper, an algorithm is developed to evaluate a robot’s trajectory, evaluate the dynamic environment in which the robot operates, and predict collisions between the robot and dynamic obstacles in its environment. The algorithm takes as input the joint motion data of predefined robot execution plans and constructs a sweep of the robot’s instantaneous poses through time. The sweep models the trajectory as a point cloud containing all locations occupied by the robot and the times at which they will be occupied. To reduce the computational burden, Coons patches are leveraged to approximate the robot’s instantaneous poses. In parallel, the algorithm creates a similar sweep to model any humans and other obstacles being tracked in the operating environment. Overlaying the temporal mappings of the sweeps reveals anticipated collisions that will occur if the robot and human do not proactively modify their motion. The algorithm is designed to feed into a segmentation and switching logic framework and provide real-time proactive and reactive behavior for different levels of human-robot interaction, while maintaining safety and production efficiency. To evaluate the predictive collision detection approach, multiple test cases are presented to quantify the computational speed and accuracy in predicting collisions.
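A minimal illustration of the overlay step described above: both sweeps are represented as time-stamped point clouds, and a collision is flagged when points from each occupy nearby space at nearly the same time. The brute-force search, the thresholds, and the array layout are assumptions chosen for clarity, not the published implementation.

```python
import numpy as np

def predict_collision(robot_sweep, human_sweep, d_min=0.05, t_tol=0.05):
    """Return the earliest predicted collision time, or None.

    robot_sweep, human_sweep: (N, 4) arrays of [x, y, z, t] samples modelling
    each body's swept volume over the prediction horizon.
    d_min: spatial clearance threshold in metres (illustrative value).
    t_tol: how close in time two samples must be to count as co-occupancy.
    """
    hits = []
    for px, py, pz, pt in robot_sweep:
        # Keep only human samples that occupy space at (almost) the same time.
        same_time = human_sweep[np.abs(human_sweep[:, 3] - pt) <= t_tol]
        if same_time.size == 0:
            continue
        dists = np.linalg.norm(same_time[:, :3] - np.array([px, py, pz]), axis=1)
        if np.any(dists <= d_min):
            hits.append(pt)
    return min(hits) if hits else None
```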
  2. In Human-Robot Collaboration (HRC), robots and humans must work together in shared, overlapping workspaces to accomplish tasks. If human and robot motion can be coordinated, then collisions between robot and human can be seamlessly avoided without requiring either of them to stop work. A key part of this coordination is anticipating humans’ future motion so robot motion can be adapted proactively. In this work, a generative neural network predicts a multi-step sequence of human poses for tabletop reaching motions. The multi-step sequence is mapped to a time series based on a human speed versus motion distance model. The input to the network is the human’s reaching target relative to the current pelvis location, combined with the current human pose. A dataset of human motions was generated for reaches to various positions on or above the table in front of the human, starting from a wide variety of initial human poses. After training the network, experiments showed that the predicted sequences generated by this method matched the actual recordings of human motion within an L2 joint error of 7.6 cm and an L2 link roll-pitch-yaw error of 0.301 radians on average. This method predicts motion for an entire reach without suffering from the exponential propagation of prediction error that limits the horizon of prior works.
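The mapping from a predicted pose sequence to a time series can be sketched as follows; a constant average hand speed stands in for the speed-versus-distance model, which is not reproduced here.

```python
import numpy as np

def timestamp_pose_sequence(wrist_path, avg_speed=0.5):
    """Assign timestamps to a predicted pose sequence.

    wrist_path: (K, 3) predicted wrist positions for one reach motion.
    avg_speed:  assumed mean hand speed in m/s (placeholder for a proper
                speed-versus-distance model).
    Returns times (K,), proportional to cumulative distance travelled.
    """
    steps = np.linalg.norm(np.diff(wrist_path, axis=0), axis=1)
    cumulative = np.concatenate(([0.0], np.cumsum(steps)))
    return cumulative / avg_speed
```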
  3. Pai Zheng (Ed.)
    Abstract

    A significant challenge in human–robot collaboration (HRC) is coordinating robot and human motions. Discoordination can lead to production delays and human discomfort. Prior works seek coordination by planning robot paths that treat humans or their anticipated occupancy as static obstacles, making those plans nearsighted and prone to entrapment by human motion. This work presents the spatio-temporal avoidance of predictions-prediction and planning framework (STAP-PPF) to improve robot–human coordination in HRC. STAP-PPF predicts multi-step human motion sequences based on the locations of objects the human manipulates. It then proactively determines time-optimal robot paths that account for the predicted human motion and for the robot speed restrictions anticipated under the ISO/TS 15066 speed and separation monitoring (SSM) mode. While executing robot paths, STAP-PPF continuously updates its human motion predictions and, in real time, warps the robot’s path to account for the updated predictions and SSM effects, mitigating delays and human discomfort. Results show that STAP-PPF generates robot trajectories of shorter duration, adapts better to real-time deviations in human motion, and maintains greater robot-human separation throughout tasks requiring close human–robot interaction. Tests with an assembly sequence demonstrate STAP-PPF’s ability to predict multi-step human tasks and plan robot motions for the sequence. STAP-PPF also estimates robot trajectory durations most accurately, within 30% of actual, which can be used to adapt the robot sequencing to minimize disruption.

     
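The SSM speed restriction mentioned above can be illustrated with a reduced form of the protective-separation relation. The constants below and the omission of intrusion and measurement-uncertainty terms are simplifications for illustration, not the STAP-PPF formulation or the full ISO/TS 15066 expression.

```python
import numpy as np

def ssm_speed_limit(separation, v_human=1.6, t_react=0.1, t_stop=0.3,
                    a_brake=2.0, clearance=0.2):
    """Simplified speed-and-separation-monitoring (SSM) speed limit.

    Solves a reduced protective-distance relation
        separation >= v_human*(t_react + t_stop) + v_r*t_react
                      + v_r**2 / (2*a_brake) + clearance
    for the largest admissible robot speed v_r. All constants are illustrative.
    """
    slack = separation - v_human * (t_react + t_stop) - clearance
    disc = t_react ** 2 + 2.0 * slack / a_brake
    if disc <= 0.0:
        return 0.0                      # human too close: robot must stop
    return max(0.0, a_brake * (-t_react + np.sqrt(disc)))

# Example: allowable robot speed when the human is 1.2 m away.
print(round(ssm_speed_limit(1.2), 3))
```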
  4. This paper develops a predictive collision detection algorithm that enhances safety while respecting productivity in a Human-Robot Collaboration (HRC) setting and operates on outputs from a Computer Vision (CV) environmental monitor. The prediction can trigger reactive and proactive robot action. The algorithm is designed to address two key challenges: 1) outputs from CV techniques are often highly noisy and incomplete due to occlusions and other factors, and 2) human-tracking CV approaches typically provide only a minimal set of points on the human. This noisy set of points must be augmented to define a high-fidelity model of the human’s predicted spatial and temporal occupancy. A filter is applied to decrease the algorithm’s sensitivity to errors in the CV predictions. Kinematics of the human are leveraged to infer a full model of the human from a set of at most 18 points and to transform it into a point cloud occupying the swept volume of the human’s motion. This form can then be quickly compared with a compatible robot model for collision detection. Timed tests show that creating the human and robot models and performing the subsequent collision check takes less than 30 ms on average, making this algorithm real-time capable.
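One way to picture the keypoints-to-occupancy step is sketched below: tracked joints are smoothed, limb segments are sampled, and each sample is offset by an assumed limb radius to form a volumetric point cloud. The skeleton topology, radius, and filter here are hypothetical choices, not the paper’s human model.

```python
import numpy as np

# Hypothetical skeleton: (parent, child) index pairs into a keypoint array.
LIMBS = [(0, 1), (1, 2), (2, 3),      # torso / head chain (illustrative)
         (1, 4), (4, 5), (5, 6),      # right arm
         (1, 7), (7, 8), (8, 9)]      # left arm

def smooth_keypoints(prev, current, alpha=0.6):
    """Exponential smoothing to damp noisy CV keypoint estimates."""
    return alpha * current + (1.0 - alpha) * prev

def limbs_to_point_cloud(keypoints, radius=0.07, samples=8):
    """Approximate the body as points along thickened limb segments.

    keypoints: (J, 3) joint positions from the CV tracker.
    radius:    assumed limb radius in metres (illustrative).
    """
    rng = np.random.default_rng(0)
    points = []
    for a, b in LIMBS:
        for u in np.linspace(0.0, 1.0, samples):
            centre = (1 - u) * keypoints[a] + u * keypoints[b]
            # Small random offset on the order of the limb radius to give
            # the segment thickness.
            offset = rng.uniform(-1.0, 1.0, size=3) * radius
            points.append(centre + offset)
    return np.asarray(points)
```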
  5. This paper presents a comprehensive disassembly sequence planning (DSP) algorithm for the human–robot collaboration (HRC) setting that considers several important factors, including limited resources and human workers’ safety. The proposed DSP algorithm is capable of planning and distributing disassembly tasks among the human operator, the robot, and HRC, aiming to minimize the total disassembly time without violating resource and safety constraints. Regarding the resource constraints, we consider one human operator, one robot, and a limited quantity of disassembly tools. Regarding the safety constraints, we consider avoiding potential human injuries from to-be-disassembled components and possible collisions between the human operator and the robot due to the short distance between disassembly tasks. In addition, the transitions for tool changing, the movement between disassembly modules, and the precedence constraints of components to be disassembled are also formulated as constraints in the problem. Both numerical and experimental studies on the disassembly of a used hard disk drive (HDD) have been conducted to validate the proposed algorithm.
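As a rough illustration of distributing precedence-constrained disassembly tasks, the sketch below greedily assigns each ready task to whichever agent can finish it earliest. Task names, durations, and the greedy rule are invented for illustration; the paper’s full formulation with tool, safety, and HRC-mode constraints is not reproduced here.

```python
durations = {                         # task -> {agent: seconds} (illustrative)
    "cover":   {"human": 20, "robot": 35},
    "pcb":     {"human": 40, "robot": 30},
    "platter": {"human": 25, "robot": 45},
}
precedence = {"cover": [], "pcb": ["cover"], "platter": ["cover", "pcb"]}

finish = {}                           # task -> completion time
free_at = {"human": 0.0, "robot": 0.0}
remaining = set(durations)

def ready_time(task):
    """Earliest start allowed by the task's predecessors."""
    return max((finish[p] for p in precedence[task]), default=0.0)

while remaining:
    # Schedule every task whose predecessors are already finished.
    for task in sorted(t for t in remaining
                       if all(p in finish for p in precedence[t])):
        start = {a: max(free_at[a], ready_time(task)) for a in free_at}
        best = min(free_at, key=lambda a: start[a] + durations[task][a])
        finish[task] = start[best] + durations[task][best]
        free_at[best] = finish[task]
        remaining.discard(task)

print(finish)   # e.g. {'cover': 20.0, 'pcb': 50.0, 'platter': 75.0}
```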