

Title: Kinematic coordinations capture learning during human–exoskeleton interaction
Abstract

Human–exoskeleton interactions have the potential to bring about changes in human behavior for physical rehabilitation or skill augmentation. Despite significant advances in the design and control of these robots, their application to human training remains limited. The key obstacles to the design of such training paradigms are the prediction of human–exoskeleton interaction effects and the selection of interaction control to affect human behavior. In this article, we present a method to elucidate behavioral changes in the human–exoskeleton system and identify expert behaviors correlated with a task goal. Specifically, we observe the joint coordinations of the robot, also referred to as kinematic coordination behaviors, that emerge from human–exoskeleton interaction during learning. We demonstrate the use of kinematic coordination behaviors with two task domains through a set of three human-subject studies. We find that participants (1) learn novel tasks within the exoskeleton environment, (2) demonstrate similarity of coordination during successful movements within participants, (3) learn to leverage these coordination behaviors to maximize success within participants, and (4) tend to converge to similar coordinations for a given task strategy across participants. At a high level, we identify task-specific joint coordinations that are used by different experts for a given task goal. These coordinations can be quantified by observing experts and the similarity to these coordinations can act as a measure of learning over the course of training for novices. The observed expert coordinations may further be used in the design of adaptive robot interactions aimed at teaching a participant the expert behaviors.
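
At the heart of the method is extracting low-dimensional joint coordinations from successful movements and scoring new trials by their similarity to an expert's coordinations. Below is a minimal sketch of that idea, assuming stacked joint-angle samples as input; the PCA basis extraction and the principal-angle similarity metric are illustrative choices, not the paper's exact pipeline.

```python
# Sketch: extract kinematic coordinations from joint-angle trajectories
# with PCA, then score a trial by subspace similarity to an expert
# coordination. Array shapes and the metric are assumptions.
import numpy as np

def coordination_basis(trajectories, n_components=2):
    """PCA basis of joint coordination.

    trajectories: (n_samples, n_joints) joint angles stacked from
    successful movements. Returns an (n_joints, n_components) basis.
    """
    X = trajectories - trajectories.mean(axis=0)
    # SVD of the centered data; right singular vectors are the PCs.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_components].T

def coordination_similarity(basis_a, basis_b):
    """Subspace similarity in [0, 1] via principal angles."""
    # Singular values of A^T B are cosines of the principal angles
    # between the two orthonormal coordination subspaces.
    s = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
    return float(np.mean(s))

# Usage: compare a novice trial's coordination to the expert's.
rng = np.random.default_rng(0)
expert = coordination_basis(rng.standard_normal((500, 7)))  # 7 robot joints
novice = coordination_basis(rng.standard_normal((500, 7)))
print(f"learning score: {coordination_similarity(expert, novice):.2f}")
```

A score near 1 would indicate that a novice's coordination subspace has converged toward the expert's over the course of training.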

 
Award ID(s):
2019704
NSF-PAR ID:
10425859
Publisher / Repository:
Nature Publishing Group
Date Published:
Journal Name:
Scientific Reports
Volume:
13
Issue:
1
ISSN:
2045-2322
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Exoskeleton robots are capable of safe torque-controlled interactions with a wearer while moving their limbs through pre-defined trajectories. However, affecting and assisting the wearer's movements while effectively incorporating their inputs (effort and movements) during an interaction remains an open problem due to the complex and variable nature of human motion. In this paper, we present a control algorithm that leverages task-specific movement behaviors to control robot torques during unstructured interactions by implementing a force field that imposes a desired joint angle coordination behavior. This control law, built using principal component analysis (PCA), is implemented and tested with the Harmony exoskeleton. We show that the proposed control law is versatile enough to allow for the imposition of different coordination behaviors with varying levels of impedance stiffness. We also test the feasibility of our method for unstructured human-robot interaction. Specifically, we demonstrate that participants in a human-subject experiment are able to effectively perform reaching tasks while the exoskeleton imposes the desired joint coordination under different movement speeds and interaction modes. Survey results further suggest that the proposed control law may offer a reduction in cognitive or motor effort. This control law opens up the possibility of using the exoskeleton to train a participant in accomplishing complex movements.
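
As a rough illustration of such a coordination-imposing force field, the sketch below penalizes joint deviations from a PCA coordination subspace with adjustable stiffness while leaving motion within the subspace free. The basis, reference posture, and gains are placeholders, not the Harmony controller's actual parameters.

```python
# Sketch: force field attracting the joints toward an affine coordination
# subspace q_ref + span(U), with tunable stiffness and damping. U, q_ref,
# and the gains are illustrative assumptions.
import numpy as np

def coordination_torque(q, dq, q_ref, U, k_stiff, k_damp):
    """q, dq  : (n,) joint angles and velocities
    q_ref    : (n,) reference posture on the coordination subspace
    U        : (n, m) orthonormal coordination basis (e.g., from PCA)
    """
    n = q.shape[0]
    # Projector onto the orthogonal complement of the coordination subspace.
    P_perp = np.eye(n) - U @ U.T
    e = P_perp @ (q - q_ref)  # deviation from the desired coordination
    return -k_stiff * e - k_damp * (P_perp @ dq)

# Example with a 7-joint arm and a 2-D coordination basis.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((7, 2)))
tau = coordination_torque(q=rng.standard_normal(7), dq=rng.standard_normal(7),
                          q_ref=np.zeros(7), U=U, k_stiff=20.0, k_damp=2.0)
print(tau)
```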
  2. Abstract This paper explores the kinematic synthesis, design, and pilot experimental testing of a six-legged walking robotic platform able to traverse different terrains. We aim to develop a structured approach to designing the limb morphology using a relaxed kinematic task with incorporated conditions on foot-environment interaction, specifically contact force direction and curvature constraints related to maintaining contact. The design approach builds up incrementally, starting with studying the basic human leg walking trajectory and then defining a “relaxed” kinematic task. The “relaxed” kinematic task consists of only two contact locations (toe-off and heel-strike) with higher-order motion task specifications compatible with foot-terrain contact and curvature constraints in the vicinity of the two contacts. As the next step, an eight-bar leg linkage is created based on the “relaxed” kinematic task and incorporated within a six-legged walking robot. Pilot experimental tests explore whether the proposed approach results in an adaptable behavior that allows the platform to incorporate different walking foot trajectories and gait styles coupled to each environment. The results suggest that the proposed “relaxed” higher-order motion task, combined with the leg morphological properties and feet material, allowed the platform to walk stably on the different terrains. We note that one of the main advantages of the proposed method over existing walking platforms is its carefully designed limb morphology with incorporated conditions on foot-environment interaction. Additionally, while most existing multilegged platforms incorporate one actuator per leg, or per joint, our goal is to explore the possibility of using a single actuator to drive all six legs of the platform. This is a critical step that opens the door for the development of future transformative technology that is largely independent of human control and able to learn about the environment through its own sensory systems.
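
One way to picture the “relaxed” kinematic task is as two contact specifications, each carrying position, tangent, and curvature constraints on the foot path. The sketch below encodes that structure with illustrative field names and tolerances; it is not the paper's formulation.

```python
# Sketch: the "relaxed" kinematic task as two contact specifications
# (toe-off and heel-strike), each constraining position, travel
# direction, and path curvature. Names and tolerances are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class ContactSpec:
    position: np.ndarray   # (2,) planar foot-point location
    tangent: np.ndarray    # (2,) unit direction of travel at contact
    curvature: float       # path curvature compatible with terrain contact

def satisfies(spec: ContactSpec, pos, vel, acc, tol=1e-2) -> bool:
    """Check a foot-trajectory sample against one contact specification."""
    speed = np.linalg.norm(vel)
    if speed < 1e-9:
        return False
    tangent = vel / speed
    # Planar curvature: kappa = |x' y'' - y' x''| / |v|^3
    kappa = abs(vel[0] * acc[1] - vel[1] * acc[0]) / speed**3
    return (np.linalg.norm(pos - spec.position) < tol
            and np.linalg.norm(tangent - spec.tangent) < tol
            and abs(kappa - spec.curvature) < tol)

toe_off = ContactSpec(np.array([0.12, 0.0]), np.array([1.0, 0.0]), 0.5)
print(satisfies(toe_off, pos=np.array([0.12, 0.0]),
                vel=np.array([0.3, 0.0]), acc=np.array([0.0, 0.045])))
```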
  3. Background

    In Physical Human–Robot Interaction (pHRI), the need to learn the robot’s motor-control dynamics is associated with increased cognitive load. Eye-tracking metrics can help understand the dynamics of fluctuating mental workload over the course of learning.

    Objective

    The aim of this study was to test the sensitivity and reliability of eye-tracking measures to variations in task difficulty, as well as their ability to predict performance, in physical human–robot collaboration tasks involving an industrial robot for object comanipulation.

    Methods

    Participants (9M, 9F) learned to coperform a virtual pick-and-place task with a bimanual robot over multiple trials. Joint stiffness of the robot was manipulated to increase motor-coordination demands. The psychometric properties of eye-tracking measures and their ability to predict performance were investigated.

    Results

    Stationary Gaze Entropy and pupil diameter were the most reliable and sensitive measures of workload associated with changes in task difficulty and learning. Increased task difficulty was more likely to result in a robot-monitoring strategy. Eye-tracking measures were able to predict the occurrence of success or failure in each trial with 70% sensitivity and 71% accuracy.

    Conclusion

    The sensitivity and reliability of eye-tracking measures were acceptable, although values were lower than those observed in cognitive domains. Measures of gaze behaviors indicative of visual monitoring strategies were most sensitive to task-difficulty manipulations and should be explored further in the pHRI domain, where motor control and internal-model formation are likely to be strong contributors to workload.

    Application

    Future collaborative robots could adapt to the human's cognitive state and skill level, measured using eye-tracking metrics of workload and visual attention.
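
For concreteness, a minimal sketch of one of the reliable measures, Stationary Gaze Entropy, together with a logistic-regression success predictor, is given below. The bin layout, feature set, and synthetic data are assumptions, not the study's actual pipeline.

```python
# Sketch: Stationary Gaze Entropy (SGE) over spatial gaze bins, plus a
# logistic-regression trial-success predictor on SGE and pupil diameter.
import numpy as np
from sklearn.linear_model import LogisticRegression

def stationary_gaze_entropy(gaze_xy, bins=8):
    """Shannon entropy (bits) of the fixation-location distribution."""
    hist, _, _ = np.histogram2d(gaze_xy[:, 0], gaze_xy[:, 1], bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
# One SGE value and mean pupil diameter per trial, with a success label.
X = np.column_stack([
    [stationary_gaze_entropy(rng.random((200, 2))) for _ in range(40)],
    rng.normal(3.5, 0.3, 40),          # pupil diameter (mm), synthetic
])
y = rng.integers(0, 2, 40)             # synthetic success/failure labels
clf = LogisticRegression().fit(X, y)
print("predicted success:", clf.predict(X[:5]))
```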

     
  4. Healthy human locomotion with good gait symmetry depends on rhythmic coordination of the left and right legs, which can be degraded by neurological disorders such as stroke and spinal cord injury. Powered exoskeletons are promising devices for improving the locomotion of impaired individuals, including restoring gait symmetry. However, given the high uncertainty and time-varying nature of human-robot interaction, providing personalized robotic assistance from exoskeletons to achieve the best gait symmetry is challenging, especially for people with neurological disorders. In this paper, we propose a hierarchical control framework for a bilateral hip exoskeleton to provide adaptive optimal hip joint assistance with the control objective of imposing the desired gait symmetry during walking. The hierarchical framework comprises three control levels: a high-level control that tunes three control parameters using a policy-iteration reinforcement learning approach, a middle-level control that defines the desired assistive torque profile using a delayed output feedback control method, and a low-level control that achieves accurate torque trajectory tracking. To evaluate the feasibility of the proposed control framework, five healthy young participants were recruited for treadmill walking experiments in which an artificial gait asymmetry imitating post-stroke hemiparesis was induced and only the ‘paretic’ hip joint was controlled with the proposed framework. The pilot experimental studies demonstrate that the hierarchical control framework successfully (asymmetry index from 8.8% to −0.5%) and efficiently (in less than 4 minutes) achieved the desired gait symmetry by providing adaptive optimal assistance at the ‘paretic’ hip joint.
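
Two pieces of this hierarchy can be sketched compactly: the gait asymmetry index that the high level drives toward zero, and a middle-level delayed output feedback torque of the form τ(t) = k·θ_hip(t − d). The gains, delay, and asymmetry definition below are illustrative assumptions, not the paper's tuned values.

```python
# Sketch: a step-time asymmetry index and a delayed output feedback
# assistive torque. All numbers are placeholders for illustration.
from collections import deque
import numpy as np

def asymmetry_index(t_paretic, t_nonparetic):
    """Percent step-time asymmetry; 0 means symmetric gait."""
    return 100.0 * (t_paretic - t_nonparetic) / (0.5 * (t_paretic + t_nonparetic))

class DelayedOutputFeedback:
    """Assistive torque from the hip angle measured `delay_steps` ago."""
    def __init__(self, gain, delay_steps):
        self.gain = gain
        self.buffer = deque([0.0] * delay_steps, maxlen=delay_steps)

    def torque(self, hip_angle):
        delayed = self.buffer[0]       # oldest sample = delayed output
        self.buffer.append(hip_angle)  # push current sample
        return self.gain * delayed

print(f"asymmetry: {asymmetry_index(0.62, 0.55):.1f}%")
ctrl = DelayedOutputFeedback(gain=8.0, delay_steps=50)  # 0.25 s @ 200 Hz
for t in np.arange(0.0, 0.5, 0.005):
    tau = ctrl.torque(np.sin(2 * np.pi * t))            # synthetic hip angle
print(f"assistive torque: {tau:.2f} N·m")
```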
  5. Abstract Background Few studies have systematically investigated robust controllers for lower limb rehabilitation exoskeletons (LLREs) that can safely and effectively assist users with a variety of neuromuscular disorders to walk with full autonomy. One of the key challenges in developing such a robust controller is handling different degrees of uncertain human-exoskeleton interaction forces from the patients. Consequently, conventional walking controllers are either patient-condition specific or involve tuning many control parameters, and can behave unreliably or even fail to maintain balance. Methods We present a novel deep-neural-network, reinforcement-learning-based robust controller for an LLRE based on decoupled offline human-exoskeleton simulation training with three independent networks, which aims to provide reliable walking assistance against varied and uncertain human-exoskeleton interaction forces. The exoskeleton controller is driven by a neural network control policy that acts on a stream of the LLRE’s proprioceptive signals, including joint kinematic states, and predicts real-time position-control targets for the actuated joints. To handle uncertain human interaction forces, the control policy is deliberately trained with an integrated human musculoskeletal model and realistic human-exoskeleton interaction forces. Two other neural networks are connected to the control policy network to predict the interaction forces and muscle coordination. To further increase the robustness of the control policy to different human conditions, we employ domain randomization during training, which includes not only randomization of exoskeleton dynamics properties but, more importantly, randomization of human muscle strength to simulate the variability of the patient’s disability. Through this decoupled deep reinforcement learning framework, the trained controller is able to provide reliable walking assistance to patients with different degrees of neuromuscular disorders without any control parameter tuning. Results and conclusion A universal, RL-based walking controller is trained and virtually tested on an LLRE system to verify its effectiveness and robustness in assisting users with different disabilities, such as passive muscles (quadriplegic), muscle weakness, or hemiplegic conditions, without any control parameter tuning. Analysis of the RMSE for joint tracking, CoP-based stability, and gait symmetry shows the effectiveness of the controller. An ablation study also demonstrates the strong robustness of the control policy under large ranges of exoskeleton dynamic properties and various human-exoskeleton interaction forces. The decoupled network structure allows us to isolate the LLRE control policy network for testing and sim-to-real transfer, since it uses only proprioceptive information of the LLRE (joint sensory state) as input. Furthermore, the controller is shown to handle different patient conditions without the need for patient-specific control parameter tuning.
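
The flavor of the decoupled setup, a proprioception-only policy network plus per-episode domain randomization over exoskeleton dynamics and human muscle strength, can be sketched as follows. Network sizes, randomization ranges, and the stubbed rollout are assumptions rather than the paper's implementation.

```python
# Sketch: a proprioception-only exoskeleton policy and per-episode
# domain randomization. Sizes and ranges are illustrative placeholders.
import numpy as np
import torch
import torch.nn as nn

class ExoPolicy(nn.Module):
    """Maps LLRE proprioception (joint positions/velocities) to
    position-control targets for the actuated joints."""
    def __init__(self, n_obs=24, n_act=6, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_obs, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_act),
        )

    def forward(self, obs):
        return self.net(obs)

def randomize_domain(rng):
    """Sample one episode's physics: exoskeleton dynamics plus a
    muscle-strength scale standing in for degrees of disability."""
    return {
        "link_mass_scale": rng.uniform(0.8, 1.2),
        "joint_friction": rng.uniform(0.0, 0.2),
        "muscle_strength": rng.uniform(0.0, 1.0),  # 0 = fully passive
    }

policy = ExoPolicy()
rng = np.random.default_rng(0)
for episode in range(3):
    params = randomize_domain(rng)     # would configure the simulator
    obs = torch.zeros(1, 24)           # placeholder proprioceptive state
    targets = policy(obs)              # real-time joint position targets
    print(round(params["muscle_strength"], 2), tuple(targets.shape))
```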