Title: Adaptive Compensation for Robotic Joint Failures Using Partially Observable Reinforcement Learning
Robotic manipulators are widely used in various industries for complex and repetitive tasks. However, they remain vulnerable to unexpected hardware failures. In this study, we address the challenge of enabling a robotic manipulator to complete tasks despite joint malfunctions. Specifically, we develop a reinforcement learning (RL) framework that adaptively compensates for a nonfunctional joint during task execution. Our experimental platform is the Franka robot with seven degrees of freedom (DOFs). We formulate the problem as a partially observable Markov decision process (POMDP), in which the robot is trained under various joint failure conditions and tested in both seen and unseen scenarios. We consider scenarios where a joint is permanently broken and where it functions intermittently. Additionally, we demonstrate the effectiveness of our approach by comparing it with traditional inverse kinematics-based control methods. The results show that the RL algorithm enables the robot to successfully complete tasks even with joint failures, achieving an average success rate of 93.6%, which demonstrates its robustness and adaptability. Our findings highlight the potential of RL to enhance the resilience and reliability of robotic systems, making them better suited for unpredictable environments.
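The paper's environment code is not reproduced here, but the failure model the abstract describes is straightforward to sketch. Below is a minimal, hypothetical gymnasium-style wrapper, assuming a per-joint velocity (or torque) action space; JointFailureWrapper, n_joints, and work_prob are illustrative names, not the authors' implementation. Keeping the failed joint's index out of the observation is what makes the problem partially observable.

```python
import numpy as np
import gymnasium as gym


class JointFailureWrapper(gym.Wrapper):
    """Hypothetical wrapper that simulates a broken manipulator joint.

    The failed joint ignores its commanded action, either always
    (permanent failure) or with some probability (intermittent failure).
    The failure index is NOT exposed in the observation, so the policy
    must infer the fault from how the arm responds: a POMDP.
    """

    def __init__(self, env, n_joints=7, work_prob=0.0):
        super().__init__(env)
        self.n_joints = n_joints
        self.work_prob = work_prob  # 0.0 => permanently broken joint
        self.failed_joint = None

    def reset(self, **kwargs):
        # Resample the failure condition at the start of each episode.
        self.failed_joint = np.random.randint(self.n_joints)
        return self.env.reset(**kwargs)

    def step(self, action):
        action = np.asarray(action, dtype=np.float32).copy()
        # With probability (1 - work_prob) the joint ignores its command.
        if np.random.rand() >= self.work_prob:
            action[self.failed_joint] = 0.0
        return self.env.step(action)
```

Resampling the failure each episode, as in this sketch, is one way to realize the "trained under various joint failure conditions" setting the abstract describes.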
Award ID(s):
2138206
PAR ID:
10637602
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
MDPI
Date Published:
Journal Name:
Algorithms
Volume:
17
Issue:
10
ISSN:
1999-4893
Page Range / eLocation ID:
436
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Mobile robot navigation is a critical aspect of robotics, with applications spanning from service robots to industrial automation. However, navigating complex and dynamic environments poses many challenges, such as avoiding obstacles, making decisions in real time, and adapting to new situations. Reinforcement Learning (RL) has emerged as a promising approach for enabling robots to learn navigation policies from their interactions with the environment. However, the application of RL methods to real-world tasks such as mobile robot navigation, and the evaluation of their performance under various training-testing settings, has not been sufficiently researched. In this paper, we design an evaluation framework that investigates an RL algorithm's generalization to unseen scenarios, in terms of learning convergence and success rate, by transferring policies learned in simulation to physical environments. To achieve this, we designed a simulated environment in Gazebo for training the robot over a large number of episodes. The training environment closely mimics typical indoor scenarios that a mobile robot may encounter, replicating real-world challenges. For evaluation, we designed physical environments with and without unforeseen indoor scenarios. The framework outputs statistical metrics, which we use to conduct an extensive study of a deep RL method, namely proximal policy optimization (PPO). The results provide valuable insights into the strengths and limitations of the method for mobile robot navigation. Our experiments demonstrate that a model trained in simulation can be deployed to a previously unseen physical world with a success rate of over 88%. The insights gained from our study can assist practitioners and researchers in selecting suitable RL approaches and training-testing settings for their specific robotic navigation tasks.
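As a rough illustration of the training-testing setting described above, here is a hypothetical PPO setup using stable-baselines3; the environment ids NavSim-v0 and NavReal-v0 are placeholders for the paper's Gazebo training world and physical test environments, not real registered environments.

```python
# Hypothetical sketch: train PPO in a simulated navigation world, then
# evaluate the frozen policy in a different (previously unseen) world.
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

train_env = gym.make("NavSim-v0")   # placeholder id: Gazebo training world
model = PPO("MlpPolicy", train_env, verbose=1)
model.learn(total_timesteps=1_000_000)  # many episodes in simulation

# Sim-to-real style evaluation: same policy, unseen layout.
test_env = gym.make("NavReal-v0")   # placeholder id: physical test world
mean_reward, std_reward = evaluate_policy(model, test_env, n_eval_episodes=50)
print(f"unseen-environment return: {mean_reward:.1f} +/- {std_reward:.1f}")
```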
  2. As robots are increasingly used in remote, safety-critical, and hazardous applications, their reliability is becoming more important than ever. Robotic arm joint motor-drive systems are vulnerable to hardware failures due to harsh operating environments, which may yield various joint failures and result in significant downtime costs. Targeting the most common robotic joint drives, brushless DC (BLDC) motor-drive systems, this paper proposes a robust online method for diagnosing semiconductor faults in BLDC motor drives. The proposed fault diagnostic technique is based on stator current signature analysis. Specifically, the paper investigates the performance of BLDC joint motors under open-circuit faults of the inverter switches using finite element co-simulation tools. The proposed methodology is capable not only of detecting open-circuit faults but also of identifying the faulty switches based on a knowledge table covering various fault conditions. The robustness of the proposed technique was verified through extensive simulations under different speed and load conditions. Moreover, simulations were carried out on a Kinova Gen-3 robot arm to verify the theoretical findings, highlighting the impact of locked joints on the robot's end-effector location. Finally, experimental results are presented to corroborate the performance of the proposed fault diagnostic strategy.
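The knowledge-table idea can be sketched as follows. This is a hypothetical illustration only, assuming that an open switch suppresses one half-cycle of its phase current and therefore skews that phase's mean over an electrical cycle; FAULT_TABLE and the switch labels S1-S6 are illustrative stand-ins, not the paper's actual table.

```python
import numpy as np

# Hypothetical knowledge table: the sign of each phase's mean current over
# an electrical cycle points to the half-bridge switch that stopped
# conducting. A real table would come from the fault simulations.
FAULT_TABLE = {
    ("neg", "ok", "ok"): "S1 (phase A, upper switch) open",
    ("pos", "ok", "ok"): "S2 (phase A, lower switch) open",
    ("ok", "neg", "ok"): "S3 (phase B, upper switch) open",
    ("ok", "pos", "ok"): "S4 (phase B, lower switch) open",
    ("ok", "ok", "neg"): "S5 (phase C, upper switch) open",
    ("ok", "ok", "pos"): "S6 (phase C, lower switch) open",
}

def classify_phase(i_phase, threshold):
    """A healthy phase current averages ~0 over a full electrical cycle;
    an open upper switch removes the positive half-cycles (mean < 0),
    an open lower switch removes the negative ones (mean > 0)."""
    mean = np.mean(i_phase)
    if mean < -threshold:
        return "neg"
    if mean > threshold:
        return "pos"
    return "ok"

def diagnose(i_a, i_b, i_c, threshold=0.1):
    # i_a, i_b, i_c: one electrical cycle of sampled stator currents.
    key = tuple(classify_phase(i, threshold) for i in (i_a, i_b, i_c))
    return FAULT_TABLE.get(key, "healthy or unmodeled fault")
```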
  3. Aleksandra Faust, David Hsu (Eds.)
    Modern Reinforcement Learning (RL) algorithms are not sample-efficient enough to train on multi-step tasks in complex domains, impeding their wider deployment in the real world. We address this problem by leveraging the insight that RL models trained to complete one set of tasks can be repurposed to complete related tasks given just a handful of demonstrations. Based on this insight, we propose See-SPOT-Run (SSR), a new computational approach to robot learning that enables a robot to complete a variety of real-robot tasks in novel problem domains without task-specific training. SSR uses pretrained RL models to create vectors that represent model, task, and action relevance in demonstration and test scenes. SSR then compares these vectors via our Cycle Consistency Distance (CCD) metric to determine the next action to take. SSR completes 58% more task steps and 20% more trials than a baseline few-shot learning method that requires task-specific training. SSR also achieves a four-order-of-magnitude improvement in compute efficiency, and a 20% to three-order-of-magnitude improvement in sample efficiency, compared with the baseline and with training RL models from scratch. To our knowledge, we are the first to address multi-step tasks from demonstration on a real robot without task-specific training, where both the visual input and the action-space output are high-dimensional. Code is available in the supplement.
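The CCD metric itself is not defined in the abstract; the sketch below shows one plausible reading of a cycle-consistency comparison between demonstration and test embeddings. All names and the round-trip construction are assumptions, not the paper's definition.

```python
import numpy as np

def cycle_consistency_distance(test_vecs, demo_vecs):
    """One plausible cycle-consistency comparison: for each test embedding,
    hop to its nearest demonstration embedding, hop back to the nearest
    test embedding, and measure how far the round trip lands from where it
    started. Small distances suggest the two scenes correspond well."""
    dists = []
    for v in test_vecs:
        # Forward hop: nearest demonstration vector.
        d = demo_vecs[np.argmin(np.linalg.norm(demo_vecs - v, axis=1))]
        # Backward hop: nearest test vector to that demonstration vector.
        v_back = test_vecs[np.argmin(np.linalg.norm(test_vecs - d, axis=1))]
        dists.append(np.linalg.norm(v - v_back))
    return float(np.mean(dists))
```

Under this reading, action selection would score each candidate next action by the CCD between its test-scene vectors and the demonstration's, and take the minimizer.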
  4. Using robots capable of collaborating with humans to complete physical tasks in unstructured spaces is a rapidly growing approach to work. Robots used as nursing assistants are a particular example of where increased automation can raise productivity. In this paper, we present a mobile manipulator designed to assist nurses with patient-walking and patient-sitting tasks in hospital environments. The Adaptive Robotic Nursing Assistant (ARNA) robot consists of an omnidirectional base with an instrumented handlebar and a 7-DOF robotic arm. We describe its components and the novelties in its mechanisms and instrumentation. Experiments with human subjects that gauge the usability and ease of use of the ARNA robot in a medical environment indicate that the robot is likely to see significant real-world use, and they serve as the basis for a discussion of how the robot's features facilitate its adaptation to other scenarios and environments.
  5. Despite the potential of reinforcement learning (RL) for building general-purpose robotic systems, training RL agents to solve robotics tasks remains challenging due to the difficulty of exploration in purely continuous action spaces. Addressing this problem is an active area of research, with most of the focus on improving RL methods via better optimization or more efficient exploration. An alternative, and important, component to improve is the interface between the RL algorithm and the robot. In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy. These parameterized primitives are expressive, simple to implement, enable efficient exploration, and can be transferred across robots, tasks, and environments. We perform a thorough empirical study across challenging tasks in three distinct domains with image input and a sparse terminal reward. We find that this simple change to the action interface substantially improves both learning efficiency and task performance, irrespective of the underlying RL algorithm, significantly outperforming prior methods that learn skills from offline expert data. Code and videos: https://mihdalal.github.io/raps/
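A rough sketch of such a parameterized-primitive interface is shown below. The primitives, the robot API (ee_position, set_ee_velocity, set_gripper), and the flat action decoding are all hypothetical stand-ins in the spirit of RAPS, not the paper's actual library.

```python
import numpy as np

# Illustrative primitive library: each primitive is a hand-written
# controller whose arguments the RL policy learns to output.
def reach(robot, args):
    target = np.asarray(args[:3])
    for _ in range(20):                      # short closed-loop controller
        robot.set_ee_velocity(target - robot.ee_position())

def grasp(robot, args):
    robot.set_gripper(float(args[0]))        # commanded gripper width

def lift(robot, args):
    target = robot.ee_position() + np.array([0.0, 0.0, float(args[0])])
    reach(robot, target)

PRIMITIVES = [reach, grasp, lift]
ARG_DIMS = [3, 1, 1]                         # argument dimension per primitive

def execute(robot, policy_output):
    """Decode a flat policy action: the first len(PRIMITIVES) entries
    select a primitive (argmax); the rest hold the concatenated
    argument vector, sliced per primitive."""
    idx = int(np.argmax(policy_output[:len(PRIMITIVES)]))
    args = policy_output[len(PRIMITIVES):]
    offset = sum(ARG_DIMS[:idx])
    PRIMITIVES[idx](robot, args[offset:offset + ARG_DIMS[idx]])
```

Because the policy only chooses which primitive to run and with what arguments, exploration happens over a handful of semantically meaningful knobs rather than raw continuous joint commands, which is the efficiency argument the abstract makes.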