

Title: A Robust and Reliable Climbing Robot for Steel Structure Inspection
This paper presents a new, robust and reliable robot capable of carrying heavy equipment loads without sacrificing mobility, which can improve the safety and detail of steel inspections in difficult-to-access areas. In addition, the robot carries an embedded NORTEC 600 eddy current sensor and a GoPro camera, allowing it to conduct nondestructive evaluation and collect high-resolution imagery of steel structures. The data is processed into a heatmap for quick and easy interpretation by the user. To verify the robot's designed capabilities, a set of mechanical analyses was performed to quantify the robot's limits and failure mechanics. Applying our robot would increase inspector safety by reducing how often an inspector must hang underneath a bridge or travel along a narrow section. A demonstration of the robot deployments can be seen at this link: https://youtu.be/8d78d7CWXYk
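As a minimal sketch of the heatmap step described above: raw sensor amplitudes over a scanned patch can be normalized to a fixed range before colormapping for display. The grid shape, data values, and function name below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: normalize a grid of eddy current amplitude
# readings to [0, 1] so it can be rendered as a heatmap.
import numpy as np

def readings_to_heatmap(readings):
    """Min-max normalize raw sensor amplitudes for colormapping."""
    grid = np.asarray(readings, dtype=float)
    lo, hi = grid.min(), grid.max()
    if hi == lo:                    # flat signal: nothing to highlight
        return np.zeros_like(grid)
    return (grid - lo) / (hi - lo)

# Example: a 3x3 scan patch with one high-amplitude (possible defect) cell
patch = [[0.2, 0.2, 0.3],
         [0.2, 0.9, 0.3],
         [0.1, 0.2, 0.2]]
heat = readings_to_heatmap(patch)
```

The normalized array could then be passed to any standard colormapping routine; the bright cell stands out as a candidate defect.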
Award ID(s):
1846513 1919127
PAR ID:
10318772
Author(s) / Creator(s):
Date Published:
Journal Name:
2022 IEEE/SICE International Symposium on System Integration (SII)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Shared autonomy provides a framework where a human and an automated system, such as a robot, jointly control the system’s behavior, enabling an effective solution for various applications, including human-robot interaction and remote operation of a semi-autonomous system. However, a challenging problem in shared autonomy is safety, because the human input may be unknown and unpredictable, which affects the robot’s safety constraints. If the human input is a force applied through physical contact with the robot, it also alters the robot’s behavior as the robot works to maintain safety. We address the safety issue of shared autonomy in real-time applications by proposing a two-layer control framework. In the first layer, we use the history of human input measurements to infer what the human wants the robot to do, and we define the robot’s safety constraints according to that inference. In the second layer, we formulate a rapidly-exploring random tree of barrier pairs, with each barrier pair composed of a barrier function and a controller. Using the controllers in these barrier pairs, the robot is able to maintain safe operation under intervention from the human input. This proposed control framework allows the robot to assist the human while preventing them from encountering safety issues. We demonstrate the proposed control framework on a simulation of a two-linkage manipulator robot. 
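The barrier-pair idea in the abstract above can be sketched in miniature: each pair couples a barrier function h(x) ≥ 0 (certifying a safe set) with a controller, and the robot acts using a pair whose barrier certifies the current state. The one-dimensional dynamics, gains, and names below are invented for illustration and are not the paper's formulation.

```python
# Hypothetical sketch of "barrier pairs": a barrier function paired
# with a controller, with selection by whichever barrier certifies
# the current state as safe.
def make_barrier_pair(center, radius, gain):
    def h(x):                   # barrier: nonnegative inside the safe ball
        return radius**2 - (x - center)**2
    def controller(x):          # simple proportional pull toward the center
        return -gain * (x - center)
    return h, controller

pairs = [make_barrier_pair(0.0, 1.0, 0.5),
         make_barrier_pair(2.0, 1.0, 0.5)]

def select_action(x):
    """Pick the first barrier pair whose barrier certifies x as safe."""
    for h, ctrl in pairs:
        if h(x) >= 0:
            return ctrl(x)
    raise RuntimeError("no barrier pair covers this state")

u = select_action(1.8)   # state covered only by the second pair
```

In the paper's framework the pairs are organized in a rapidly-exploring random tree rather than a flat list; this sketch only shows the certify-then-act pattern.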
  2. Robots and humans closely working together within dynamic environments must be able to continuously look ahead and identify potential collisions within their ever-changing environment. To enable the robot to act upon such situational awareness, its controller requires an iterative collision detection capability that will allow for computationally efficient Proactive Adaptive Collaboration Intelligence (PACI) to ensure safe interactions. In this paper, an algorithm is developed to evaluate a robot’s trajectory, evaluate the dynamic environment that the robot operates in, and predict collisions between the robot and dynamic obstacles in its environment. This algorithm takes as input the joint motion data of predefined robot execution plans and constructs a sweep of the robot’s instantaneous poses throughout time. The sweep models the trajectory as a point cloud containing all locations occupied by the robot and the time at which they will be occupied. To reduce the computational burden, Coons patches are leveraged to approximate the robot’s instantaneous poses. In parallel, the algorithm creates a similar sweep to model any human(s) and other obstacles being tracked in the operating environment. Overlaying temporal mapping of the sweeps reveals anticipated collisions that will occur if the robot and human do not proactively modify their motion. The algorithm is designed to feed into a segmentation and switching logic framework and provide real-time proactive-n-reactive behavior for different levels of human-robot interactions, while maintaining safety and production efficiency. To evaluate the predictive collision detection approach, multiple test cases are presented to quantify the computational speed and accuracy in predicting collisions. 
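A stripped-down sketch of the sweep-overlap idea above: model each agent's trajectory as timestamped occupied points and flag any shared time step where the robot and human come within a clearance threshold. The point representation, threshold, and function name are illustrative assumptions; the paper's method uses full pose sweeps approximated with Coons patches, not single points.

```python
# Hypothetical sketch: predict collisions by overlaying two
# timestamped trajectories and checking pairwise clearance per step.
import math

def predict_collision(robot_traj, human_traj, clearance=0.5):
    """robot_traj / human_traj: dicts mapping time step -> (x, y)."""
    hits = []
    for t in sorted(set(robot_traj) & set(human_traj)):
        rx, ry = robot_traj[t]
        hx, hy = human_traj[t]
        if math.hypot(rx - hx, ry - hy) < clearance:
            hits.append(t)          # collision anticipated at this step
    return hits

robot = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0)}
human = {0: (3.0, 0.0), 1: (1.2, 0.0), 2: (0.0, 2.0)}
collisions = predict_collision(robot, human)   # overlap at t = 1
```

The returned time steps are exactly the points where proactive motion modification would be triggered.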
  3. While human safety is always a concern in an environment with human-robot collaboration, this concern is magnified when the human and robot workspaces overlap. This overlap creates the potential for collisions, which would reduce the safety of the system. Fear of such a collision could reduce the productivity of the system; this apprehensiveness is referred to as the human's perceived safety of the robot. Therefore, we designed a within-subject human-robot collaboration experiment where a human and a robot work together on an assembly task. To evaluate perceived safety during this HRC task, we collected subjective data by means of a questionnaire through two methods: during-trial and after-trial. The collected data was analyzed using non-parametric methods, and the Friedman and Wilcoxon statistical tests were conducted. The clearest relationship was found when changing only the sensitivity of the robot or all three behaviors of velocity, trajectory, and sensitivity. There is a positive, moderate linear relationship between the average safety of the during-trial data and the after-trial data. 
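The non-parametric analysis named in the abstract above typically pairs an omnibus Friedman test across all conditions with follow-up Wilcoxon signed-rank tests between condition pairs. A sketch with invented per-subject safety ratings (the condition names and data are assumptions, not the study's):

```python
# Hypothetical sketch: Friedman omnibus test, then a pairwise
# Wilcoxon signed-rank test, on made-up perceived-safety ratings.
from scipy.stats import friedmanchisquare, wilcoxon

# One rating per subject under three hypothetical robot settings
baseline  = [3, 2, 4, 3, 2, 3, 4, 2]
slower    = [4, 3, 4, 4, 3, 4, 5, 3]
sensitive = [5, 4, 5, 4, 4, 5, 5, 4]

stat, p_friedman = friedmanchisquare(baseline, slower, sensitive)
if p_friedman < 0.05:               # omnibus difference detected
    stat_w, p_pair = wilcoxon(baseline, sensitive)
```

Because the ratings are ordinal and the design is within-subject, these rank-based tests are the standard choice over a parametric ANOVA.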
  4. Zero-day vulnerabilities pose a significant challenge to robot cyber-physical systems (CPS). Attackers can exploit software vulnerabilities in widely-used robotics software, such as the Robot Operating System (ROS), to manipulate robot behavior, compromising both safety and operational effectiveness. The hidden nature of these vulnerabilities requires strong defense mechanisms to guarantee the safety and dependability of robotic systems. In this paper, we introduce ROBOCOP, a cyber-physical attack detection framework designed to protect robots from zero-day threats. ROBOCOP leverages static software features in the pre-execution analysis along with runtime state monitoring to identify attack patterns and deviations that signal attacks, thus ensuring the robot’s operational integrity. We evaluated ROBOCOP on the F1-tenth autonomous car platform. It achieves a 93% detection accuracy against a variety of zero-day attacks targeting sensors, actuators, and controller logic. Importantly, in on-robot deployments, it identifies attacks in less than 7 seconds with a 12% computational overhead. 
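The runtime-monitoring half of the framework described above can be caricatured as learning a nominal envelope for a robot state signal offline and flagging runtime samples that deviate beyond a tolerance. The class, signal, and threshold below are illustrative assumptions and are not ROBOCOP's actual detection logic.

```python
# Hypothetical sketch: flag runtime state samples that fall outside
# a mean +/- k*std envelope learned from nominal (attack-free) traces.
class StateMonitor:
    def __init__(self, nominal_trace, tolerance=3.0):
        n = len(nominal_trace)
        self.mean = sum(nominal_trace) / n
        var = sum((v - self.mean) ** 2 for v in nominal_trace) / n
        self.band = tolerance * var ** 0.5

    def is_attack(self, sample):
        """True when a sample deviates beyond the learned envelope."""
        return abs(sample - self.mean) > self.band

# Nominal steering angles recorded during safe laps (invented data)
monitor = StateMonitor([0.1, -0.05, 0.0, 0.08, -0.1, 0.02])
alert = monitor.is_attack(1.5)    # injected actuator command: flagged
ok = monitor.is_attack(0.05)      # nominal command: passes
```

A real detector would combine such runtime deviation checks with the pre-execution static analysis the abstract describes.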
  5. Goal-conditioned policies, such as those learned via imitation learning, provide an easy way for humans to influence what tasks robots accomplish. However, these robot policies are not guaranteed to execute safely or to succeed when faced with out-of-distribution goal requests. In this work, we enable robots to know when they can confidently execute a user’s desired goal, and automatically suggest safe alternatives when they cannot. Our approach is inspired by control-theoretic safety filtering, wherein a safety filter minimally adjusts a robot’s candidate action to be safe. Our key idea is to pose alternative suggestion as a safe control problem in goal space, rather than in action space. Offline, we use reachability analysis to compute a goal-parameterized reach-avoid value network which quantifies the safety and liveness of the robot’s pre-trained policy. Online, our robot uses the reach-avoid value network as a safety filter, monitoring the human’s given goal and actively suggesting alternatives that are similar but meet the safety specification. We demonstrate our Safe ALTernatives (SALT) framework in simulation experiments with indoor navigation and Franka Panda tabletop manipulation, and with both discrete and continuous goal representations. We find that SALT is able to learn to predict successful and failed closed-loop executions, is a less pessimistic monitor than open-loop uncertainty quantification, and proposes alternatives that consistently align with those that people find acceptable. 
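The goal-space filtering idea in the last abstract can be reduced to a toy: given a value V(goal) where positive values certify that the pre-trained policy can reach the goal safely, accept the user's goal if it is certified, otherwise suggest the nearest certified alternative. The tabulated values and one-dimensional goals below are invented stand-ins for the learned reach-avoid value network.

```python
# Hypothetical sketch: a goal-space safety filter over a lookup table
# standing in for a learned reach-avoid value function V(goal).
def filter_goal(requested, goal_values, threshold=0.0):
    if goal_values.get(requested, float("-inf")) > threshold:
        return requested            # requested goal is certified safe
    # otherwise suggest the closest certified alternative
    safe = [g for g, v in goal_values.items() if v > threshold]
    return min(safe, key=lambda g: abs(g - requested))

values = {1.0: 0.4, 2.0: 0.1, 3.0: -0.2, 4.0: -0.5}  # V(goal) lookup
suggestion = filter_goal(3.0, values)                 # nearest safe goal
```

The filtering happens in goal space, as the abstract emphasizes, so the policy's actions are never directly modified; only the commanded goal is.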