Title: A Learning-based Adjustable Autonomy Framework for Human-Robot Collaboration
In this paper, an adjustable autonomy framework is proposed for Human-Robot Collaboration (HRC), in which a robot uses a reinforcement learning mechanism guided by a human operator's rewards in an initially unknown workspace. Within the proposed framework, the robot can adjust its autonomy level in an HRC setting represented as a Markov Decision Process. A novel Q-learning mechanism with an integrated greedy approach is implemented so that the robot learns the correct actions and recognizes its own mistakes when adjusting its autonomy level. The proposed HRC framework can adapt to changes in the workspace and can adjust the autonomy level, provided the human operator's rewards are consistent. The developed algorithm is applied to a realistic HRC setting involving a Baxter humanoid robot. The experimental results confirm the capability of the developed framework to successfully adjust the robot's autonomy level in response to changes in the human operator's commands or the workspace.
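To make the learning mechanism concrete, the following is a minimal sketch of a Q-learning agent with an epsilon-greedy policy choosing among discrete autonomy levels from the operator's rewards. The state encoding, the three autonomy levels, and all hyperparameter values are illustrative assumptions, not the paper's exact MDP formulation.

```python
import random
from collections import defaultdict

# Hypothetical discrete action set: the autonomy levels the robot can select.
AUTONOMY_LEVELS = ["manual", "shared", "autonomous"]

class AdjustableAutonomyAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)  # Q[(state, action)] -> estimated value
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # discount factor
        self.epsilon = epsilon       # exploration rate (the greedy approach)

    def choose_action(self, state):
        # Epsilon-greedy: mostly exploit the best known autonomy level,
        # occasionally explore so mistakes can be discovered and corrected.
        if random.random() < self.epsilon:
            return random.choice(AUTONOMY_LEVELS)
        return max(AUTONOMY_LEVELS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, operator_reward, next_state):
        # Standard Q-learning backup, driven by the human operator's reward.
        best_next = max(self.q[(next_state, a)] for a in AUTONOMY_LEVELS)
        target = operator_reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

Because the operator's reward enters every backup, a consistent reward signal steers the greedy policy toward the autonomy level the operator prefers in each workspace state, matching the framework's adaptation claim.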
Award ID(s):
1832110
NSF-PAR ID:
10326308
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
IEEE Transactions on Industrial Informatics
ISSN:
1551-3203
Page Range / eLocation ID:
1 to 1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Human-Robot Collaboration (HRC), which enables a workspace where human and robot can dynamically and safely collaborate for improved operational efficiency, has been identified as a key element in smart manufacturing. Human action recognition plays a key role in the realization of HRC, as it helps identify the current human action and provides the basis for future action prediction and robot planning. While Deep Learning (DL) has demonstrated great potential in advancing human action recognition, effectively leveraging the temporal information of human motions to improve the accuracy and robustness of action recognition has remained a challenge. Furthermore, it is often difficult to obtain a large volume of data for DL network training and optimization, due to operational constraints in a realistic manufacturing setting. This paper presents an integrated method to address these two challenges, based on optical flow and convolutional neural network (CNN)-based transfer learning. Specifically, optical flow images, which encode the temporal information of human motion, are extracted and serve as the input to a two-stream CNN structure for simultaneous parsing of spatial-temporal information of human motion. Subsequently, transfer learning is investigated to transfer the feature extraction capability of a pre-trained CNN to manufacturing scenarios. Evaluation using engine block assembly confirmed the effectiveness of the developed method.
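As a sketch of the flow-extraction step, the snippet below turns consecutive video frames into an optical-flow image that encodes motion direction as hue and magnitude as brightness, the usual input format for the temporal stream of a two-stream CNN. The Farneback algorithm is an assumption; the abstract does not name a specific optical-flow method.

```python
import cv2
import numpy as np

def flow_image(prev_frame, next_frame):
    """Encode dense optical flow between two BGR frames as a color image."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    # Dense flow field: one (dx, dy) displacement vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros_like(prev_frame)
    hsv[..., 0] = ang * 180 / np.pi / 2  # motion direction -> hue
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)  # speed -> brightness
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```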
  2. This paper presents the design of a wearable robotic forearm for close-range human-robot collaboration. The robot's function is to serve as a lightweight supernumerary third arm for shared workspace activities. We present a functional prototype resulting from an iterative design process including several user studies. An analysis of the robot's kinematics shows an increase in reachable workspace by 246% compared to the natural human reach. The robot's degrees of freedom and range of motion support a variety of usage scenarios with the robot as a collaborative tool, including self-handovers, fetching objects while the human's hands are occupied, assisting human-human collaboration, and stabilizing an object. We analyze the biomechanical loads for these scenarios and find that the design is able to operate within human ergonomic wear limits. We then report on a pilot human-robot interaction study that indicates robot autonomy is more task-time efficient and preferred by users compared to direct voice control. These results suggest that the design presented here is a promising configuration for a lightweight wearable robotic augmentation device, and can serve as a basis for further research into human-wearable collaboration.
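Reachable-workspace figures like the 246% increase above are commonly estimated by sampling joint configurations through forward kinematics. The following is a minimal sketch of that approach for a hypothetical 3-DOF chain (shoulder yaw, shoulder pitch, elbow); the link lengths and joint limits are illustrative assumptions, not the prototype's specifications.

```python
import numpy as np
from scipy.spatial import ConvexHull

L1, L2 = 0.25, 0.20  # hypothetical link lengths in meters

def tip_position(yaw, pitch, elbow):
    # Two links moving in the vertical plane selected by yaw;
    # the elbow angle adds to the shoulder pitch within that plane.
    p1 = L1 * np.array([np.cos(pitch) * np.cos(yaw),
                        np.cos(pitch) * np.sin(yaw),
                        np.sin(pitch)])
    a = pitch + elbow
    return p1 + L2 * np.array([np.cos(a) * np.cos(yaw),
                               np.cos(a) * np.sin(yaw),
                               np.sin(a)])

# Monte Carlo sample of the joint space, then a convex-hull volume estimate.
rng = np.random.default_rng(0)
q = rng.uniform(low=[-np.pi, -np.pi / 2, 0.0],
                high=[np.pi, np.pi / 2, 2.6], size=(20000, 3))
points = np.array([tip_position(*row) for row in q])
print(f"approximate reachable volume: {ConvexHull(points).volume:.4f} m^3")
```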
  3. Human-Robot Collaboration (HRC), which envisions a workspace in which human and robot can dynamically collaborate, has been identified as a key element in smart manufacturing. Human action recognition plays a key role in the realization of HRC, as it helps identify the current human action and provides the basis for future action prediction and robot planning. Despite recent developments in Deep Learning (DL) that have demonstrated great potential in advancing human action recognition, one of the key issues remains how to effectively leverage the temporal information of human motion to improve the performance of action recognition. Furthermore, a large volume of training data is often difficult to obtain due to manufacturing constraints, which poses a challenge for the optimization of DL models. This paper presents an integrated method based on optical flow and convolutional neural network (CNN)-based transfer learning to tackle these two issues. First, optical flow images, which encode the temporal information of human motion, are extracted and serve as the input to a two-stream CNN structure for simultaneous parsing of spatial-temporal information of human motion. Then, transfer learning is investigated to transfer the feature extraction capability of a pretrained CNN to manufacturing scenarios. Evaluation using engine block assembly confirmed the effectiveness of the developed method.
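The transfer-learning step described here typically amounts to freezing a pretrained feature extractor and retraining only a new classification head on the small manufacturing dataset. The sketch below shows one common way to do this; the ResNet-18 backbone and the freeze-everything-but-the-head choice are assumptions, since the abstract does not commit to a specific architecture.

```python
import torch.nn as nn
from torchvision import models

def build_action_recognizer(num_actions: int) -> nn.Module:
    # Load ImageNet-pretrained weights as the transferred feature extractor.
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for p in net.parameters():
        p.requires_grad = False  # keep pretrained features fixed
    # Replace the final layer with a trainable head for the target actions.
    net.fc = nn.Linear(net.fc.in_features, num_actions)
    return net
```

Only the new head's parameters are optimized, which is why a small, hard-to-collect dataset can suffice: the temporal and spatial feature extraction is inherited rather than learned from scratch.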
  4.
    Continuum robots have high degrees of freedom and the ability to safely move in constrained environments. One class of soft continuum robot is the “vine” robot. This type of robot extends from its tip by everting or unfurling new material, driven by internal body pressure. Most vine robot examples store new body material in a reel at their base, passing it through the core of the robot to the tip, and like many continuum robots, steer by selectively lengthening or shortening one side of the body. While this approach to steering and material storage lends itself to a fully soft device, it has three key limitations: (i) internal friction of material passing through the core of the robot limits its length in tortuous paths, (ii) body buckling as the robot's body material is re-spooled at the base can prevent retraction, and (iii) constant curvature steering limits the robot's poses and object approach angles in a given workspace. This letter presents a hybrid soft-rigid robotic system comprising a soft vine robot body and a rigid, mobile, internal steering-reeling mechanism (SRM); this SRM is equipped with a reel for material storage, a bending actuator for steering, and is capable of actuating the robot at any point along its length. This hybrid configuration increases reach along tortuous paths, allows retraction, and increases the workspace. We describe the motivation for the device, generate its mathematical models, present its methods of operation, and verify experimentally the models we developed and the performance improvements over previous vine robots. 
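Limitation (iii) follows directly from the standard planar constant-curvature model used for base-steered continuum robots: body length and curvature fix both the tip position and the approach angle, so neither can be chosen independently. The short sketch below illustrates that coupling; the model is the generic constant-curvature abstraction, not the letter's full SRM kinematics.

```python
import numpy as np

def tip_pose(s, kappa):
    """Tip position and heading for a planar arc of length s, curvature kappa."""
    if abs(kappa) < 1e-9:                  # straight-line limit
        return np.array([0.0, s]), 0.0
    x = (1.0 - np.cos(kappa * s)) / kappa  # lateral offset of the tip
    y = np.sin(kappa * s) / kappa          # forward offset of the tip
    theta = kappa * s                      # tip heading equals the arc angle
    return np.array([x, y]), theta

# For a fixed length, each reachable point admits exactly one curvature,
# hence exactly one approach angle; steering from an internal SRM at an
# arbitrary point along the body breaks this coupling.
pos, heading = tip_pose(s=1.0, kappa=1.2)
print(pos, np.degrees(heading))
```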
  5. Robots are increasingly being employed for diverse applications where they must work and coexist with humans. Trust in human–robot collaboration (HRC) is a critical aspect of any shared-task performance, for both the human and the robot. How a human comes to trust a robot has been investigated by numerous researchers; however, how a robot can establish trust in its human partner, which is also a significant issue in HRC, is seldom explored in the field of robotics. Motivated by this gap, we propose a novel trust-assist framework for human–robot co-carry tasks in this study. This framework allows the robot to determine a trust level for its human co-carry partner. The calculation of this trust level is based on human motions, past interactions between the human–robot pair, and the human's current performance in the co-carry task. The trust level between the human and the robot is evaluated dynamically throughout the collaborative task, which allows trust to change if the human performs false-positive actions; this can help the robot avoid making unpredictable movements and injuring the human. Additionally, the proposed framework enables the robot to generate and perform assisting movements to follow the human's carrying motions and pace when the human is considered trustworthy in the co-carry task. The results of our experiments suggest that the robot effectively assists the human in real-world collaborative tasks through the proposed trust-assist framework.
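One plausible form of such a dynamic trust level is exponential smoothing of a per-step performance score, with a threshold gating the robot's assisting behavior. The sketch below shows that shape; the smoothing weight, the threshold, and the handling of false-positive actions are illustrative assumptions, not the paper's actual model.

```python
ALPHA = 0.8             # weight on past interactions (hypothetical)
ASSIST_THRESHOLD = 0.6  # trust needed before the robot follows the human's pace

def update_trust(trust, performance_score, false_positive):
    # performance_score in [0, 1], e.g. derived from the smoothness of the
    # human's carrying motion; false-positive actions actively erode trust.
    if false_positive:
        performance_score = 0.0
    return ALPHA * trust + (1.0 - ALPHA) * performance_score

def robot_mode(trust):
    # Below the threshold the robot holds back, avoiding the unpredictable
    # movements that could injure an untrustworthy-acting partner.
    return "assist" if trust >= ASSIST_THRESHOLD else "hold"
```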