
Title: Electromyography (EMG) Signal Contributions in Speed and Slope Estimation Using Robotic Exoskeletons
Robotic exoskeletons have the capability to improve community ambulation in aging individuals. These exoskeleton controllers utilize environmental information, such as walking speed and slope incline, to provide corresponding assistance. Several numerical approaches for estimating this environmental information have been implemented; however, they tend to be limited during dynamic changes. A possible solution is a machine learning model that utilizes the user's electromyography (EMG) signals along with mechanical sensor data. We developed a neural network-based walking speed and slope estimator for a powered hip exoskeleton and explored the EMG signal contributions in both static and dynamic settings while wearing the device. We also analyzed the performance of different EMG electrode placements. The resulting machine learning model achieved errors below 0.08 m/s RMSE for speed and 1.3° RMSE for slope. Our findings from four able-bodied and two elderly subjects indicate that EMG can improve performance, reducing the error rate by 14.8% compared to a model using only mechanical sensors. Additionally, the results show that an EMG electrode configuration confined to the exoskeleton interface region is sufficient for the EMG model's performance.
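The pipeline described above — time-domain EMG features fused with mechanical sensor data and fed to a small neural network regressor — can be sketched as follows. The feature count, layer sizes, and weights here are illustrative placeholders, not the paper's trained model:

```python
import numpy as np

def emg_rms(window):
    """Root-mean-square envelope of a raw EMG window, a common time-domain feature."""
    return np.sqrt(np.mean(np.square(window)))

def estimate_speed_slope(features, W1, b1, W2, b2):
    """One-hidden-layer regressor mapping fused sensor features to [speed, slope]."""
    hidden = np.tanh(features @ W1 + b1)   # nonlinear hidden layer
    return hidden @ W2 + b2                # linear output: speed (m/s), slope (deg)

# Hypothetical fused input: 4 EMG RMS features + 2 mechanical features (hip angle, velocity).
rng = np.random.default_rng(0)
features = rng.normal(size=6)
W1, b1 = rng.normal(size=(6, 8)), np.zeros(8)   # placeholder weights (learned in practice)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
speed, slope = estimate_speed_slope(features, W1, b1, W2, b2)
```

In practice the weights would be fit on labeled treadmill data spanning the speed and slope ranges of interest.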
Journal Name:
2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR)
Page Range / eLocation ID:
548 to 553
Sponsoring Org:
National Science Foundation
More Like this
  1. Robotic exoskeletons can assist humans with walking by providing supplemental torque in proportion to the user's joint torque. Electromyographic (EMG) control algorithms can estimate a user's joint torque directly using real-time EMG recordings from the muscles that generate the torque. However, EMG signals change as a result of supplemental torque from an exoskeleton, resulting in unreliable estimates of the user's joint torque during active exoskeleton assistance. Here, we present an EMG control framework for robotic exoskeletons that provides consistent joint torque predictions across varying levels of assistance. Experiments with three healthy human participants showed that using diverse training data (from different levels of assistance) enables robust torque predictions, and that a convolutional neural network (CNN), but not a Kalman filter (KF), can capture the non-linear transformations in EMG due to exoskeleton assistance. With diverse training, the CNN could reliably predict joint torque from EMG during zero, low, medium, and high levels of exoskeleton assistance [root mean squared error (RMSE) below 0.096 N-m/kg]. In contrast, without diverse training, RMSE of the CNN ranged from 0.106 to 0.144 N-m/kg. RMSE of the KF ranged from 0.137 to 0.182 N-m/kg without diverse training, and did not improve with diverse training. When participant time is limited, training data should emphasize the highest levels of assistance first and utilize at least 35 full gait cycles for the CNN. The results presented here constitute an important step toward adaptive and robust human augmentation via robotic exoskeletons. This work also highlights the non-linear reorganization of locomotor output when using assistive exoskeletons; significant reductions in EMG activity were observed for the soleus and gastrocnemius, and a significant increase in EMG activity was observed for the erector spinae. 
Control algorithms that can accommodate spatiotemporal changes in muscle activity have broad implications for exoskeleton-based assistance and rehabilitation following neuromuscular injury. 
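A minimal sketch of the kind of CNN-style EMG-to-torque mapping this abstract describes — convolve, rectify, pool, and read out a torque estimate. The kernel, pooling choice, and readout weight are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def conv1d_valid(signal, kernel):
    """Valid-mode 1-D convolution, the core operation a CNN applies over an EMG window."""
    return np.convolve(signal, kernel, mode="valid")

def torque_from_emg(emg_window, kernel, readout):
    """Minimal CNN-style pipeline: convolve -> rectify -> average-pool -> linear readout."""
    feature_map = relu(conv1d_valid(emg_window, kernel))
    pooled = feature_map.mean()            # global average pooling to one scalar feature
    return readout * pooled                # linear readout to a torque value (N-m/kg)

emg_window = np.abs(np.sin(np.linspace(0, np.pi, 200)))  # stand-in rectified EMG burst
kernel = np.ones(25) / 25                                # simple smoothing filter
torque = torque_from_emg(emg_window, kernel, readout=0.5)
```

A trained CNN would learn many such kernels and a deeper readout, which is what lets it capture the non-linear EMG changes induced by exoskeleton assistance.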
  2. Le, Khanh N.Q. (Ed.)
    In current clinical settings, pain is typically measured from a patient's self-reported information. This subjective pain assessment results in suboptimal treatment plans, over-prescription of opioids, and drug-seeking behavior among patients. In the present study, we explored machine learning models for automatic, objective pain intensity estimation using inputs from physiological sensors. This study uses the BioVid Heat Pain Dataset. We extracted features from electrodermal activity (EDA), electrocardiogram (ECG), and electromyogram (EMG) signals collected from study participants subjected to heat pain. We built different machine learning models, including linear regression, support vector regression (SVR), neural networks, and extreme gradient boosting, for continuous pain intensity estimation. We then identified the physiological sensor, feature set, and machine learning model that give the best predictive performance. We found that EDA is the most information-rich sensor for continuous pain intensity prediction. A set of only 3 features from EDA signals using an SVR model gave an average performance of 0.93 mean absolute error (MAE) and 1.16 root mean square error (RMSE) for the subject-independent model, and 0.92 MAE and 1.13 RMSE for the subject-dependent model. The MAE achieved with this signal-feature-model combination is less than 1 unit on the 0-to-4 continuous pain scale, which is smaller than the MAE achieved by methods reported in the literature. These results demonstrate that it is possible to estimate a patient's pain intensity using a computationally inexpensive machine learning model with 3 statistical features from the EDA signal, which can be collected from a wrist biosensor. This method paves the way toward developing a wearable pain measurement device.
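The feature-plus-metric recipe can be illustrated as follows. The three statistics chosen here are a plausible stand-in, since the abstract does not name the exact EDA feature set, and the MAE/RMSE helpers mirror how the reported scores would be computed:

```python
import numpy as np

def eda_features(window):
    """Three illustrative time-domain statistics of an EDA window.
    (A stand-in: the paper's exact three-feature set is not specified here.)"""
    return np.array([window.mean(), window.std(), window.max() - window.min()])

def mae(y_true, y_pred):
    """Mean absolute error, as reported on the 0-4 pain scale."""
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    """Root mean square error, the paper's second reported metric."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Toy check of the metrics on a 0-4 continuous pain scale.
y_true = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_pred = np.array([0.5, 1.0, 2.5, 3.0, 3.5])
print(mae(y_true, y_pred), rmse(y_true, y_pred))
```

An SVR fitted on such three-element feature vectors (e.g., with scikit-learn's `SVR`) would complete the pipeline the abstract describes.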
  3. Abstract Background Few studies have systematically investigated robust controllers for lower limb rehabilitation exoskeletons (LLREs) that can safely and effectively assist users with a variety of neuromuscular disorders to walk with full autonomy. One of the key challenges in developing such a robust controller is handling different degrees of uncertain human-exoskeleton interaction forces from the patients. Consequently, conventional walking controllers are either patient-condition specific or involve tuning many control parameters, and can behave unreliably or even fail to maintain balance. Methods We present a novel deep neural network, reinforcement learning-based robust controller for an LLRE based on decoupled offline human-exoskeleton simulation training with three independent networks, which aims to provide reliable walking assistance against varied and uncertain human-exoskeleton interaction forces. The exoskeleton controller is driven by a neural network control policy that acts on a stream of the LLRE's proprioceptive signals, including joint kinematic states, and subsequently predicts real-time position control targets for the actuated joints. To handle uncertain human interaction forces, the control policy is intentionally trained with an integrated human musculoskeletal model and realistic human-exoskeleton interaction forces. Two other neural networks are connected to the control policy network to predict the interaction forces and muscle coordination. To further increase the robustness of the control policy to different human conditions, we employ domain randomization during training that includes not only randomization of exoskeleton dynamics properties but, more importantly, randomization of human muscle strength to simulate the variability of the patient's disability.
Through this decoupled deep reinforcement learning framework, the trained controller of LLREs is able to provide reliable walking assistance to patients with different degrees of neuromuscular disorders without any control parameter tuning. Results and conclusion A universal, RL-based walking controller is trained and virtually tested on an LLRE system to verify its effectiveness and robustness in assisting users with different disabilities, such as passive muscles (quadriplegic), muscle weakness, or hemiplegic conditions, without any control parameter tuning. Analysis of the RMSE for joint tracking, CoP-based stability, and gait symmetry shows the effectiveness of the controller. An ablation study also demonstrates the strong robustness of the control policy under large exoskeleton dynamic property ranges and various human-exoskeleton interaction forces. The decoupled network structure allows us to isolate the LLRE control policy network for testing and sim-to-real transfer, since it uses only proprioceptive information of the LLRE (joint sensory state) as input. Furthermore, the controller is shown to be able to handle different patient conditions without the need for patient-specific control parameter tuning.
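Domain randomization as described — sampling exoskeleton dynamics properties and human muscle strength anew for each training episode — might look like the sketch below. The parameter names and ranges are assumptions for illustration, not the paper's values:

```python
import random

def randomize_domain(rng=random):
    """Draw one randomized configuration per training episode.
    Names and ranges are illustrative placeholders, not the paper's settings."""
    return {
        "exo_mass_scale":  rng.uniform(0.8, 1.2),  # exoskeleton dynamics randomization
        "joint_friction":  rng.uniform(0.0, 0.1),
        "muscle_strength": rng.uniform(0.2, 1.0),  # simulates varying degrees of disability
    }

# Each RL episode would reset the simulator with a fresh draw:
episode_cfg = randomize_domain()
```

Training the policy across many such draws is what makes it robust to patients it has never seen, without per-patient parameter tuning.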
  4. Objective: In this study, we aimed to develop a novel electromyography (EMG)-based neural machine interface (NMI), called the Neural Network-Musculoskeletal hybrid Model (N2M2), to decode continuous joint angles. Our approach combines the concepts of machine learning and musculoskeletal modeling. Methods: We compared our novel design with a musculoskeletal model (MM) and 2 continuous EMG decoders based on artificial neural networks (ANNs): multilayer perceptrons (MLPs) and nonlinear autoregressive neural networks with exogenous inputs (NARX networks). EMG and joint kinematics data were collected from 10 non-disabled subjects and 1 transradial amputee subject. We quantified offline performance across 3 different conditions (i.e., varied arm postures, shifted electrode locations, and noise-contaminated EMG signals) and online performance for a virtual postural matching task. Finally, we implemented the N2M2 to operate a prosthetic hand and tested functional task performance. Results: The N2M2 made more accurate predictions than the MLP in all postures and electrode locations (p < 0.003). For estimated MCP joint angles, the N2M2 was less sensitive to noisy EMG signals than the MM or NARX network with respect to error (p < 0.032), as well as the NARX network with respect to correlation (p = 0.007). Additionally, the N2M2 had better online task performance than the NARX network (p ≤ 0.030). Conclusion: Overall, we have found that combining the concepts of machine learning and musculoskeletal modeling results in a more robust joint kinematics decoder than either concept individually. Significance: The outcome of this study may result in a novel, highly reliable controller for powered prosthetic hands.
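The defining structure of a NARX decoder — the next joint-angle estimate depends on recent exogenous EMG inputs plus its own fed-back output history — can be sketched with a linear stand-in for the learned network (weights and lag length here are illustrative):

```python
import numpy as np

def narx_step(emg_hist, angle_hist, w_emg, w_angle, bias=0.0):
    """One NARX-style update: next angle from exogenous EMG history and
    fed-back past angle estimates (a linear stand-in for the learned network)."""
    return float(emg_hist @ w_emg + angle_hist @ w_angle + bias)

def decode(emg, w_emg, w_angle, lag=3):
    """Run the decoder over an EMG sequence, feeding predictions back as history."""
    angles = [0.0] * lag                       # seed history with zeros
    for t in range(lag, len(emg)):
        angles.append(narx_step(emg[t - lag:t], np.array(angles[-lag:]),
                                w_emg, w_angle))
    return angles

# Illustrative run: constant EMG, EMG weights 0.1 each, no output feedback.
angles = decode(np.ones(10), w_emg=0.1 * np.ones(3), w_angle=np.zeros(3))
```

The output-feedback term (`w_angle`) is what distinguishes a NARX network from a plain MLP decoder, and is also why its errors can compound under noisy EMG.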
  5. High-performance actuators are crucial to enable the mechanical versatility of wearable robots, which must be lightweight, highly backdrivable, and high-bandwidth. State-of-the-art actuators, e.g., series elastic actuators (SEAs), have to compromise bandwidth to improve compliance (i.e., backdrivability). We describe the design and human-robot interaction modeling of a portable hip exoskeleton based on our custom quasi-direct drive (QDD) actuation (i.e., a high-torque-density motor with a low-ratio gear). We also present a model-based performance benchmark comparison of representative actuators in terms of torque capability, control bandwidth, backdrivability, and force tracking accuracy. This paper aims to corroborate the underlying philosophy of “design for control”, namely that meticulous robot design can simplify control algorithms while ensuring high performance. Following this idea, we created a lightweight bilateral hip exoskeleton to reduce joint loadings during normal activities, including walking and squatting. Experiments indicate that the exoskeleton is able to produce high nominal torque (17.5 Nm), high backdrivability (0.4 Nm backdrive torque), high bandwidth (62.4 Hz), and high control accuracy (1.09 Nm root mean square tracking error, 5.4% of the desired peak torque). Its controller is versatile enough to assist walking at different speeds as well as squatting. This work demonstrates performance improvement compared with state-of-the-art exoskeletons.
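The torque-tracking accuracy figures (an RMS error in Nm, also expressed as a percentage of peak desired torque) can be reproduced on toy data with a small helper; the torque profile below is illustrative, not the paper's measured data:

```python
import numpy as np

def tracking_error_pct(desired, actual):
    """RMS torque-tracking error, plus that error as a percentage of the peak
    desired torque (the '1.09 Nm RMSE, 5.4% of peak' style of reporting)."""
    rms_err = np.sqrt(np.mean((desired - actual) ** 2))
    return rms_err, 100.0 * rms_err / np.max(np.abs(desired))

# Toy example: a sinusoidal desired torque with a small high-frequency ripple.
t = np.linspace(0.0, 1.0, 500)
desired = 10.0 * np.sin(2 * np.pi * t)             # illustrative torque profile (Nm)
actual = desired + 0.2 * np.sin(20 * np.pi * t)    # small tracking ripple
err_nm, err_pct = tracking_error_pct(desired, actual)
```

Reporting the error relative to peak torque makes accuracy comparable across actuators with different torque capabilities.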