- Journal Name:
- Journal of Manufacturing Science and Engineering
- Sponsoring Org:
- National Science Foundation
More Like this
To enable safe and effective human-robot collaboration (HRC) in smart manufacturing, seamless integration of sensing, cognition, and prediction into the robot controller is critical for real-time awareness, response, and communication inside a heterogeneous environment (robots, humans, equipment). The specific research objective is to equip the robot with Proactive Adaptive Collaboration Intelligence (PACI) and switching logic within its control architecture so that it can optimally and dynamically adapt its motions, given a priori knowledge and predefined execution plans for its assigned tasks. The challenge lies in augmenting the robot’s decision-making process to achieve greater situation awareness and to yield smart robot behaviors/reactions under different levels of human-robot interaction, while maintaining safety and production efficiency. Reactive robot behaviors were achieved via cost-function-based switching logic that activates the best-suited high-level controller. The PACI’s underlying segmentation and switching-logic framework is demonstrated to yield a high degree of modularity and flexibility. The performance of the developed control structure under different levels of human-robot interaction was validated in a simulated environment, and open-loop commands were sent to the physical e.DO robot to demonstrate how the proposed framework would behave in a real application.
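The cost-function-based switching idea can be illustrated with a minimal sketch: each candidate high-level controller reports a cost reflecting how well suited it is to the current interaction level, and the lowest-cost controller is activated. All names, costs, and thresholds below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical switching logic: pick the controller whose cost function
# is lowest for the current state. Costs and thresholds are made up.

def select_controller(controllers, state):
    """Return the name of the lowest-cost controller for the current state."""
    return min(controllers, key=lambda name: controllers[name](state))

# Illustrative cost functions keyed on the measured human-robot
# separation distance (meters): each controller is cheap only in the
# regime it is designed for.
controllers = {
    "nominal":   lambda s: 0.0 if s["separation"] > 1.5 else 10.0,
    "slow_down": lambda s: 1.0 if 0.5 < s["separation"] <= 1.5 else 10.0,
    "stop":      lambda s: 2.0 if s["separation"] <= 0.5 else 10.0,
}

active = select_controller(controllers, {"separation": 0.9})
print(active)  # slow_down
```

Because each controller is a self-contained entry in the table, adding or swapping a behavior is a one-line change, which reflects the modularity the abstract claims for the switching framework.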
Human-Robot Collaboration: A Predictive Collision Detection Approach for Operation Within Dynamic Environments
Robots and humans closely working together within dynamic environments must be able to continuously look ahead and identify potential collisions within their ever-changing surroundings. To enable the robot to act upon such situational awareness, its controller requires an iterative collision detection capability that will allow for computationally efficient Proactive Adaptive Collaboration Intelligence (PACI) to ensure safe interactions. In this paper, an algorithm is developed to evaluate a robot’s trajectory, evaluate the dynamic environment that the robot operates in, and predict collisions between the robot and dynamic obstacles in its environment. This algorithm takes as input the joint motion data of predefined robot execution plans and constructs a sweep of the robot’s instantaneous poses through time. The sweep models the trajectory as a point cloud containing all locations occupied by the robot and the times at which they will be occupied. To reduce the computational burden, Coons patches are leveraged to approximate the robot’s instantaneous poses. In parallel, the algorithm creates a similar sweep to model any human(s) and other obstacles being tracked in the operating environment. Overlaying the temporal mappings of the sweeps reveals anticipated collisions that will occur if the robot and human do not proactively modify their motions. The algorithm is […]
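The temporal-overlap idea can be sketched as follows: model the robot sweep and the obstacle sweep as time-stamped point sets, then flag an anticipated collision whenever points that occupy the same time step come closer than a clearance threshold. The Coons-patch approximation is omitted, and the data structures and names are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: each sweep maps a discrete time step to the list of
# (x, y, z) points occupied at that step; a collision is anticipated
# when robot and obstacle points at the same step are too close.
import math

def predict_collisions(robot_sweep, obstacle_sweep, clearance=0.1):
    """Return the time steps at which a collision is anticipated."""
    hits = []
    for t in sorted(set(robot_sweep) & set(obstacle_sweep)):
        if any(math.dist(p, q) < clearance
               for p in robot_sweep[t] for q in obstacle_sweep[t]):
            hits.append(t)
    return hits

# Toy example: robot and human move toward each other along the x-axis.
robot = {0: [(0.0, 0.0, 0.0)], 1: [(0.5, 0.0, 0.0)], 2: [(1.0, 0.0, 0.0)]}
human = {0: [(2.0, 0.0, 0.0)], 1: [(1.4, 0.0, 0.0)], 2: [(1.05, 0.0, 0.0)]}
print(predict_collisions(robot, human))  # [2]
```

Indexing both sweeps by the same time steps is what makes the check iterative and cheap: only pose pairs that coexist in time are ever compared.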
Reactive task and motion planning for robust whole-body dynamic locomotion in constrained environments
Contact-based decision and planning methods are becoming increasingly important for endowing legged robots with higher levels of autonomy. Formal synthesis methods derived from symbolic systems have great potential for reasoning about high-level locomotion decisions and achieving complex maneuvering behaviors with correctness guarantees. This study takes a first step toward formally devising an architecture composed of task planning and control of whole-body dynamic locomotion behaviors in constrained and dynamically changing environments. At the high level, we formulate a two-player temporal logic game between the multi-limb locomotion planner and its dynamic environment to synthesize a winning strategy that delivers symbolic locomotion actions. These locomotion actions satisfy the desired high-level task specifications expressed in a fragment of temporal logic. Those actions are sent to a robust finite transition system that synthesizes a locomotion controller fulfilling state reachability constraints. This controller is further executed via a low-level motion planner that generates feasible locomotion trajectories. We construct a set of dynamic locomotion models for legged robots to serve as a template library for handling diverse environmental events. We devise a replanning strategy that takes into consideration sudden environmental changes or large state disturbances to increase the robustness of the resulting locomotion behaviors. We formally […]
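At a very coarse level, a synthesized winning strategy can be viewed as a lookup from (planner state, environment event) to a symbolic locomotion action drawn from a template library, with a fallback that triggers replanning when an event was not covered. The states, events, actions, and fallback below are toy assumptions for illustration, not the authors' synthesis procedure.

```python
# Illustrative reactive layer: a strategy table maps (state, event) to
# (next_state, action); unmodeled events fall back to a safe stop,
# standing in for the replanning step described in the abstract.

TEMPLATE_LIBRARY = {"walk", "hop", "stop", "sidestep"}

STRATEGY = {
    ("standing", "clear"):    ("walking", "walk"),
    ("walking",  "clear"):    ("walking", "walk"),
    ("walking",  "gap"):      ("standing", "hop"),
    ("walking",  "obstacle"): ("standing", "sidestep"),
    ("standing", "gap"):      ("standing", "hop"),
}

def step(state, event):
    """Return (next_state, action); stop in place if the event is unmodeled."""
    next_state, action = STRATEGY.get((state, event), (state, "stop"))
    assert action in TEMPLATE_LIBRARY  # every action comes from the library
    return next_state, action

print(step("walking", "gap"))  # ('standing', 'hop')
```

Restricting every emitted action to the template library mirrors how the formal strategy can only choose among pre-verified locomotion templates.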
Autonomous navigation of steel bridge inspection robots is essential for proper maintenance. The majority of existing robotic solutions for bridge inspection require human intervention to assist in control and navigation. In this paper, a control system framework is proposed for the previously designed ARA robot, which facilitates autonomous real-time navigation and minimizes human involvement. The mechanical design and control framework of the ARA robot enable two different configurations, namely the mobile and inch-worm transformations. In addition, a switching control was developed, with 3D point clouds of steel surfaces as the input, which allows the robot to switch between the mobile and inch-worm transformations. The surface availability algorithm of the switching control (which considers the plane, its area, and its height) enables the robot to perform inch-worm jumps autonomously. The mobile transformation allows the robot to move on continuous steel surfaces and perform visual inspection of steel bridge structures. Practical experiments on actual steel bridge structures highlight the effective performance of the ARA robot with the proposed control framework for autonomous navigation during visual inspection of steel bridges.
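A surface-availability check in the spirit described above might verify a candidate planar patch's fit quality, area, and height before permitting an inch-worm jump. The thresholds, units, and helper name below are assumptions for illustration, not the ARA robot's actual parameters.

```python
# Hypothetical surface-availability gate: a candidate patch extracted
# from the 3D point cloud is accepted only if it is planar enough,
# large enough, and within a reachable height change.

def surface_available(area_m2, height_m, fit_residual_m,
                      min_area=0.04, max_height=0.5, max_residual=0.01):
    """Return True if the candidate patch permits an inch-worm jump."""
    return (fit_residual_m <= max_residual   # patch is close to planar
            and area_m2 >= min_area          # enough foothold area
            and height_m <= max_height)      # reachable height change

print(surface_available(area_m2=0.09, height_m=0.2, fit_residual_m=0.004))  # True
print(surface_available(area_m2=0.01, height_m=0.2, fit_residual_m=0.004))  # False
```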
Shared autonomy provides a framework where a human and an automated system, such as a robot, jointly control the system’s behavior, enabling an effective solution for various applications, including human-robot interaction and remote operation of a semi-autonomous system. A challenging problem in shared autonomy, however, is safety, because the human input may be unknown and unpredictable, which affects the robot’s safety constraints. If the human input is a force applied through physical contact with the robot, it also directly alters the behavior the robot must exhibit to maintain safety. We address the safety issue of shared autonomy in real-time applications by proposing a two-layer control framework. In the first layer, we use the history of human input measurements to infer what the human wants the robot to do and define the robot’s safety constraints according to that inference. In the second layer, we formulate a rapidly-exploring random tree of barrier pairs, with each barrier pair composed of a barrier function and a controller. Using the controllers in these barrier pairs, the robot is able to maintain its safe operation under intervention from the human input. The proposed control framework allows the robot to assist the human while preventing them from encountering safety issues. […]
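The barrier-pair idea can be illustrated in one dimension: a barrier function h(x) ≥ 0 certifies that the state is in the safe set, and its paired controller overrides the (possibly unsafe) human input whenever the next state would leave that set. The dynamics, gains, and limits below are toy assumptions, not the paper's formulation.

```python
# Illustrative 1-D barrier pair: h(x) = limit - |x| is nonnegative
# exactly on the safe interval [-limit, limit]; the paired controller
# takes over when the human input would exit the set.

def barrier(x, limit=1.0):
    """h(x) >= 0 iff x is inside the safe interval [-limit, limit]."""
    return limit - abs(x)

def safe_step(x, human_input, dt=0.1, limit=1.0):
    """Integrate x' = u for one step; override unsafe human input."""
    proposed = x + dt * human_input
    if barrier(proposed, limit) >= 0.0:
        return proposed              # human input is safe: pass it through
    return x + dt * (-2.0 * x)       # paired controller: retreat inward

x = safe_step(0.95, human_input=2.0)  # 1.15 would exit the set -> override
print(round(x, 3))  # 0.76
```

The pass-through branch is what lets the human drive the robot freely in the interior of the safe set, while the override branch enforces safety only at its boundary.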