

Search for: All records

Creators/Authors contains: "Flowers, Jared"


  1. Pai Zheng (Ed.)
    Abstract: A significant challenge in human–robot collaboration (HRC) is coordinating robot and human motions. Discoordination can lead to production delays and human discomfort. Prior works seek coordination by planning robot paths that treat humans or their anticipated occupancy as static obstacles, making them nearsighted and prone to entrapment by human motion. This work presents the spatio-temporal avoidance of predictions-prediction and planning framework (STAP-PPF) to improve robot–human coordination in HRC. STAP-PPF predicts multi-step human motion sequences based on the locations of objects the human manipulates. It then proactively determines time-optimal robot paths that account for predicted human motion and for the robot speed restrictions anticipated under the ISO/TS 15066 speed and separation monitoring (SSM) mode. While executing robot paths, STAP-PPF continuously updates its human motion predictions and, in real time, warps the robot's path to account for the updated predictions and SSM effects, mitigating delays and human discomfort. Results show that STAP-PPF generates robot trajectories of shorter duration, adapts better to real-time deviations in human motion, and maintains greater robot–human separation throughout tasks requiring close human–robot interaction. Tests with an assembly sequence demonstrate STAP-PPF's ability to predict multi-step human tasks and plan robot motions for the sequence. STAP-PPF also estimates robot trajectory durations most accurately, within 30% of actual, which can be used to adapt robot sequencing to minimize disruption.
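The SSM speed restriction that STAP-PPF anticipates follows from the ISO/TS 15066 protective separation distance, which bounds robot speed as a function of the current human-robot separation. The sketch below inverts that relation under simplifying assumptions (constant human approach speed, constant robot deceleration); the function name and all parameter values are illustrative, not taken from the paper.

```python
import math

def max_robot_speed(separation, v_human=1.6, t_react=0.1, t_stop=0.3,
                    a_max=2.0, margin=0.1):
    """Largest robot speed v_r (m/s) such that an SSM-style protective
    separation distance does not exceed the current separation.

    separation : current human-robot distance (m)
    v_human    : assumed human approach speed (m/s)
    t_react    : sensing/reaction time (s)
    t_stop     : robot stopping time (s)
    a_max      : robot deceleration (m/s^2)
    margin     : intrusion/uncertainty margin (m)
    All defaults are illustrative, not normative values.
    """
    # budget left after human travel during reaction+stop, minus margin
    budget = separation - v_human * (t_react + t_stop) - margin
    if budget <= 0.0:
        return 0.0  # human too close: robot must stop
    # solve v_r*t_react + v_r^2 / (2*a_max) = budget for v_r (quadratic)
    disc = t_react ** 2 + 2.0 * budget / a_max
    return a_max * (-t_react + math.sqrt(disc))
```

A planner can call this along each candidate path segment to predict the slowdown a given robot-human separation would impose, which is how anticipated SSM delays can enter a path cost.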
  2. In human-robot collaboration (HRC), robots and humans must work together in shared, overlapping workspaces to accomplish tasks. If human and robot motion can be coordinated, collisions between them can be avoided seamlessly without requiring either to stop work. A key part of this coordination is anticipating humans' future motion so that robot motion can be adapted proactively. In this work, a generative neural network predicts a multi-step sequence of human poses for tabletop reaching motions. The multi-step sequence is mapped to a time series based on a model of human speed versus motion distance. The input to the network is the human's reaching target relative to the current pelvis location, combined with the current human pose. A dataset of human motions was generated for reaching various positions on or above the table in front of the human, starting from a wide variety of initial poses. After training, experiments showed that the predicted sequences matched actual recordings of human motion within an average L2 joint error of 7.6 cm and an average L2 link roll-pitch-yaw error of 0.301 radians. This method predicts an entire reaching motion without suffering from the exponential propagation of prediction error that limits the horizon of prior works.
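The abstract maps the predicted pose sequence onto a time axis with a human speed-versus-distance model whose exact form is not given. A minimal sketch of that step, assuming a triangular (slow-fast-slow) speed profile and treating one joint's keypoints as the path, might look like:

```python
import math

def time_parameterize(points, peak_speed=1.0):
    """Assign timestamps to a predicted pose-keypoint sequence.

    `points` is a list of (x, y, z) positions of one tracked joint
    (e.g. the wrist) at each predicted step.  The speed model here is
    an assumed symmetric triangular profile over total path length,
    standing in for the paper's speed-vs-distance model.
    Returns a list of (time, point) pairs.
    """
    # cumulative arc length along the sequence
    dists = [0.0]
    for a, b in zip(points, points[1:]):
        dists.append(dists[-1] + math.dist(a, b))
    total = dists[-1]
    if total == 0.0:
        return [(0.0, p) for p in points]
    # v(s) = peak_speed * (1 - |2s/total - 1|); integrate dt = ds / v
    times = [0.0]
    for s0, s1 in zip(dists, dists[1:]):
        mid = 0.5 * (s0 + s1)
        v = max(peak_speed * (1.0 - abs(2.0 * mid / total - 1.0)), 0.05)
        times.append(times[-1] + (s1 - s0) / v)
    return list(zip(times, points))
```

The floor of 0.05 m/s keeps the integration finite at the profile's zero-speed endpoints; any monotone speed-vs-distance fit could be dropped in instead.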
  3. This paper addresses the human-robot collaboration (HRC) challenge of integrating predictions of human activity to provide a proactive-n-reactive response capability for the robot. Prior works that consider current or predicted human poses as static obstacles are too nearsighted or too conservative in planning, potentially causing delayed robot paths. Alternatively, time-varying prediction of human poses enables robot paths that avoid anticipated human poses, synchronized dynamically in time and space. Herein, a proactive path planning method, denoted STAP, is presented that uses spatio-temporal human occupancy maps to find robot trajectories that anticipate human movements, allowing robot passage without stopping. In addition, STAP anticipates delays from the robot speed restrictions required by ISO/TS 15066 speed and separation monitoring (SSM). STAP also includes a sampling-based planning algorithm, based on RRT*, that solves the spatio-temporal motion planning problem and finds paths of minimum expected duration. Experimental results show that STAP generates paths of shorter duration and greater average robot-human separation distance throughout tasks. Additionally, STAP more accurately estimates robot trajectory durations in HRC, which is useful for proactive-n-reactive robot sequencing.
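STAP's planner must check timed robot states against predicted human occupancy rather than against a static obstacle set. A minimal sketch of such a spatio-temporal edge check, assuming a voxelized occupancy map keyed by time bin (the names and the 0.1 m / 0.1 s discretization are illustrative assumptions, not values from the paper):

```python
def edge_is_free(edge, occupancy, voxel=0.1, dt=0.1):
    """Check a timed robot path segment against a spatio-temporal
    human occupancy map.

    `edge` is a list of (t, x, y, z) samples along the segment.
    `occupancy` maps a discrete time index to the set of voxel cells
    predicted to be occupied by the human at that time.
    Returns True if no sample lands in predicted human occupancy.
    """
    for t, x, y, z in edge:
        cell = (round(x / voxel), round(y / voxel), round(z / voxel))
        if cell in occupancy.get(round(t / dt), set()):
            return False  # sample collides with predicted occupancy
    return True
```

Because the map is indexed by time, the same workspace cell can be blocked at one instant and free at another, which is what lets a sampling-based planner route the robot through space the human will have vacated.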
  4. This paper develops a predictive collision detection algorithm for enhancing safety while respecting productivity in a human-robot collaboration (HRC) setting, operating on outputs from a computer vision (CV) environmental monitor. The prediction can trigger reactive and proactive robot action. The algorithm addresses two key challenges: 1) outputs from CV techniques are often highly noisy and incomplete due to occlusions and other factors, and 2) human-tracking CV approaches typically provide only a minimal set of points on the human. This noisy set of points must be augmented to define a high-fidelity model of the human's predicted spatial and temporal occupancy. A filter is applied to decrease the algorithm's sensitivity to errors in the CV predictions. Human kinematics are leveraged to infer a full model of the human from a set of at most 18 points and transform it into a point cloud occupying the swept volume of the human's motion. This form can then be compared quickly with a compatible robot model for collision detection. Timed tests show that creating the human and robot models and performing the subsequent collision check takes less than 30 ms on average, making the algorithm real-time capable.
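To illustrate the swept-volume idea, the sketch below interpolates sphere centers along skeleton links across predicted frames and runs a point-versus-point-cloud distance check. The link radius, sample counts, and all names are assumptions for illustration, not values from the paper:

```python
import math

def swept_cloud(frames, links, samples=5):
    """Build a coarse point cloud of the volume swept by a human
    skeleton over a sequence of predicted frames.

    `frames` is a list of keypoint dicts (name -> (x, y, z)) for
    successive predicted time steps; `links` is a list of keypoint-name
    pairs modelling limbs as capsules.
    """
    cloud = []
    for kp in frames:
        for a, b in links:
            pa, pb = kp[a], kp[b]
            for i in range(samples):  # interpolate along each limb
                u = i / (samples - 1)
                cloud.append(tuple(pa[j] + u * (pb[j] - pa[j])
                                   for j in range(3)))
    return cloud

def in_collision(robot_points, cloud, radius=0.12):
    """Sphere check between robot sample points and the swept cloud."""
    return any(math.dist(r, c) < radius
               for r in robot_points for c in cloud)
```

A real implementation would use a spatial index (k-d tree or voxel hash) instead of the brute-force double loop, but the representation — skeleton points inflated into a swept point cloud, then compared against a sampled robot model — is the same.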
  5. null (Ed.)
    Industry 4.0 projects ubiquitous collaborative robots in the smart factories of the future, particularly in assembly and material handling. To ensure efficient and safe human-robot collaborative interactions, this paper presents a novel algorithm for estimating the Risk of Passage (ROP) a robot incurs by passing between dynamic obstacles (humans, moving equipment, etc.). The paper posits that robot trajectory durations will be shorter and safer if the robot reacts proactively to a predicted robot-human collision before it occurs, rather than when it is imminent. That is, if the risk that obstacles may prohibit robot passage at a future time in the robot's trajectory exceeds a user-defined risk limit, then an Obstacle Pair Volume (OPV), encompassing the obstacles at that time, is added to the planning scene. Simulation results show that an ROP algorithm can be trained in ∼120 workcell cycles. Further, when a trained ROP algorithm introduces an OPV, trajectory durations are shorter than when obstacles are avoided without one. ROP estimation with the addition of an OPV allows workcells to operate more smoothly and proactively, with shorter cycle times, in the presence of unforeseen obstacles.
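The ROP-to-OPV decision described above can be sketched as a threshold test that, when the risk limit is exceeded, merges an obstacle pair into one bounding volume for the planning scene. The axis-aligned-box representation, the clearance value, and all names are illustrative assumptions:

```python
def maybe_add_opv(pass_risk, risk_limit, obs_a, obs_b, clearance=0.05):
    """Decide whether to close the gap between two dynamic obstacles
    with an Obstacle Pair Volume (OPV).

    `pass_risk` is the estimated probability that the gap between
    obstacles `obs_a` and `obs_b` (axis-aligned boxes given as
    (min_xyz, max_xyz) tuples) will be blocked when the robot arrives.
    If it exceeds the user-defined `risk_limit`, return a box enclosing
    both obstacles plus a small clearance, to be added to the planning
    scene; otherwise return None and let the planner route through.
    """
    if pass_risk <= risk_limit:
        return None  # gap likely passable: plan through it
    lo = tuple(min(obs_a[0][i], obs_b[0][i]) - clearance for i in range(3))
    hi = tuple(max(obs_a[1][i], obs_b[1][i]) + clearance for i in range(3))
    return (lo, hi)  # OPV: planner treats the pair as one obstacle
```

Adding the OPV before planning is what makes the behavior proactive: the planner never commits to a corridor that the risk estimate says is likely to close, instead of discovering the blockage mid-execution and stopping.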