Objective: Accurate, non-invasive methods for estimating joint and muscle physiological states have the potential to greatly enhance control of wearable devices during real-world ambulation. Traditional modeling approaches and current estimation methods used to predict muscle dynamics often rely on complex equipment or computationally intensive simulations and have difficulty estimating across a broad spectrum of tasks or subjects. Methods: Our approach used deep learning (DL) models trained on kinematic inputs to estimate internal physiological states at the knee, including moment, power, velocity, and force. We assessed each model's performance against ground truth labels from both a commonly used, standard OpenSim musculoskeletal model without EMG (static optimization) and an EMG-informed method (CEINMS), across 28 different cyclic and noncyclic tasks. Results: EMG provided no benefit for joint moment/power estimation (e.g., biological moment), but was critical for estimating muscle states. Models trained with EMG-informed labels but without EMG as an input to the DL system significantly outperformed models trained without EMG (e.g., 33.7% improvement for muscle moment estimation) (p < 0.05). Models that included EMG-informed labels and EMG as a model input demonstrated even higher performance (49.7% improvement for muscle moment estimation) (p < 0.05), but require the availability of EMG during model deployment, which may be impractical. Conclusion/Significance: While EMG information is not necessary for estimating joint level states, there is a clear benefit during muscle level state estimation. Our results demonstrate excellent tracking of these states with EMG included only during training, highlighting the practicality of real-time deployment of this approach. 
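The reported gains (e.g., 33.7% and 49.7% for muscle moment estimation) are relative error reductions against the no-EMG baseline. A minimal sketch of that comparison, using hypothetical per-task RMSE values (the abstract does not report per-task errors):

```python
import numpy as np

def pct_improvement(rmse_baseline, rmse_model):
    """Relative RMSE reduction of a model over a baseline, in percent."""
    return 100.0 * (rmse_baseline - rmse_model) / rmse_baseline

# Hypothetical per-task RMSEs (Nm/kg), for illustration only.
rmse_no_emg  = np.array([0.32, 0.27, 0.41])  # static-optimization labels
rmse_emg_lbl = np.array([0.22, 0.18, 0.26])  # EMG-informed labels, kinematics-only input
improvement = pct_improvement(rmse_no_emg.mean(), rmse_emg_lbl.mean())
print(f"mean improvement: {improvement:.1f}%")  # → mean improvement: 34.0%
```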
Improving Biological Joint Moment Estimation During Real-World Tasks with EMG and Instrumented Insoles
            Objective: Real-time measurement of biological joint moment could enhance clinical assessments and generalize exoskeleton control. Accessing joint moments outside clinical and laboratory settings requires harnessing non-invasive wearable sensor data for indirect estimation. Previous approaches have been primarily validated during cyclic tasks, such as walking, but these methods are likely limited when translating to non-cyclic tasks where the mapping from kinematics to moments is not unique. Methods: We trained deep learning models to estimate hip and knee joint moments from kinematic sensors, electromyography (EMG), and simulated pressure insoles from a dataset including 10 cyclic and 18 non-cyclic activities. We assessed estimation error on combinations of sensor modalities during both activity types. Results: Compared to the kinematics-only baseline, adding EMG reduced RMSE by 16.9% at the hip and 30.4% at the knee (p<0.05) and adding insoles reduced RMSE by 21.7% at the hip and 33.9% at the knee (p<0.05). Adding both modalities reduced RMSE by 32.5% at the hip and 41.2% at the knee (p<0.05) which was significantly higher than either modality individually (p<0.05). All sensor additions improved model performance on non-cyclic tasks more than cyclic tasks (p<0.05). Conclusion: These results demonstrate that adding kinetic sensor information through EMG or insoles improves joint moment estimation both individually and jointly. These additional modalities are most important during non-cyclic tasks, tasks that reflect the variable and sporadic nature of the real-world. Significance: Improved joint moment estimation and task generalization is pivotal to developing wearable robotic systems capable of enhancing mobility in everyday life. 
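The sensor-fusion conditions amount to concatenating kinematic, EMG, and insole channels into a common model input. A minimal sketch of assembling fused input windows for a sequence model; the channel counts and window length are illustrative assumptions, not values from the paper:

```python
import numpy as np

def make_windows(signals, win=100):
    """Stack sensor channels and slice overlapping windows for a sequence model.

    signals: list of (T, C_i) arrays sampled on a common clock
    returns: array of shape (T - win + 1, win, sum of C_i)
    """
    fused = np.concatenate(signals, axis=1)                       # (T, C_total)
    T = fused.shape[0]
    idx = np.arange(win)[None, :] + np.arange(T - win + 1)[:, None]
    return fused[idx]                                             # (N, win, C_total)

# Hypothetical channel counts: 6 kinematic, 8 EMG, 16 insole pressure cells.
T = 500
kin, emg, sole = np.random.randn(T, 6), np.random.randn(T, 8), np.random.randn(T, 16)
X = make_windows([kin, emg, sole], win=100)
print(X.shape)  # → (401, 100, 30)
```

Dropping a modality from the list yields the ablation conditions (kinematics-only, kinematics+EMG, kinematics+insoles) compared in the paper.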
- PAR ID: 10510242
- Publisher / Repository: IEEE Transactions on Biomedical Engineering
- Date Published:
- Journal Name: IEEE Transactions on Biomedical Engineering
- ISSN: 0018-9294
- Page Range / eLocation ID: 1 to 10
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
            Research on robotic lower-limb assistive devices over the past decade has generated autonomous, multiple degree-of-freedom devices to augment human performance during a variety of scenarios. However, the increase in capabilities of these devices is met with an increase in the complexity of the overall control problem and requirement for an accurate and robust sensing modality for intent recognition. Due to its ability to precede changes in motion, surface electromyography (EMG) is widely studied as a peripheral sensing modality for capturing features of muscle activity as an input for control of powered assistive devices. In order to capture features that contribute to muscle contraction and joint motion beyond muscle activity of superficial muscles, researchers have introduced sonomyography, or real-time dynamic ultrasound imaging of skeletal muscle. However, the ability of these sonomyography features to continuously predict multiple lower-limb joint kinematics during widely varying ambulation tasks, and their potential as an input for powered multiple degree-of-freedom lower-limb assistive devices is unknown. The objective of this research is to evaluate surface EMG and sonomyography, as well as the fusion of features from both sensing modalities, as inputs to Gaussian process regression models for the continuous estimation of hip, knee and ankle angle and velocity during level walking, stair ascent/descent and ramp ascent/descent ambulation. Gaussian process regression is a Bayesian nonlinear regression model that has been introduced as an alternative to musculoskeletal model-based techniques. In this study, time-intensity features of sonomyography on both the anterior and posterior thigh along with time-domain features of surface EMG from eight muscles on the lower-limb were used to train and test subject-dependent and task-invariant Gaussian process regression models for the continuous estimation of hip, knee and ankle motion. 
Overall, anterior sonomyography sensor fusion with surface EMG significantly improved estimation of hip, knee and ankle motion for all ambulation tasks (level ground, stair and ramp ambulation) in comparison to surface EMG alone. Additionally, anterior sonomyography alone significantly improved errors at the hip and knee for most tasks compared to surface EMG. These findings help inform the implementation and integration of volitional control strategies for robotic assistive technologies.
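The Gaussian process regression used here maps sensor features to continuous joint angles. A minimal numpy sketch of the GP posterior mean with a squared-exponential kernel, shown on a toy 1-D problem; the length scale, noise level, and data are illustrative assumptions, not the paper's subject-dependent models:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior_mean(X_train, y_train, X_test, noise=1e-2):
    """GP regression posterior mean: K*ᵀ (K + σ²I)⁻¹ y."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train)
    return K_star @ np.linalg.solve(K, y_train)

# Toy regression: recover a smooth "joint angle" curve from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(50)
X_new = np.array([[0.0], [1.5]])
print(gp_posterior_mean(X, y, X_new))
```

In the study, the rows of `X_train` would instead hold fused sonomyography and EMG feature vectors, with one regressor per joint output.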
- 
Estimating human joint moments using wearable sensors has utility for personalized health monitoring and generalized exoskeleton control. Data-driven models have potential to map wearable sensor data to human joint moments, even with a reduced sensor suite and without subject-specific calibration. In this study, we quantified the RMSE and R² of a temporal convolutional network (TCN), trained to estimate human hip moments in the sagittal plane using exoskeleton sensor data (i.e., a hip encoder and thigh- and pelvis-mounted inertial measurement units). We conducted three analyses in which we iteratively retrained the network while: 1) varying the input sequence length of the model, 2) incorporating noncausal data into the input sequence, thus delaying the network estimates, and 3) time shifting the labels to train the model to anticipate (i.e., predict) human hip moments. We found that 930 ms of causal input data maintained model performance while minimizing input sequence length (validation RMSE and R² of 0.141±0.014 Nm/kg and 0.883±0.025, respectively). Further, delaying the model estimate by up to 200 ms significantly improved model performance compared to the best causal estimators (p<0.05), improving estimator fidelity in use cases where delayed estimates are acceptable (e.g., in personalized health monitoring or diagnoses). Finally, we found that anticipating hip moments further in time linearly increased model RMSE and decreased R² (p<0.05); however, performance remained strong (R²>0.85) when predicting up to 200 ms ahead.
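The anticipation analysis amounts to pairing each causal input window with a label shifted forward in time. A minimal sketch of that label shift; the 200 Hz sampling rate and array sizes are illustrative assumptions, not values from the paper:

```python
import numpy as np

def shift_labels_for_anticipation(X, y, lead_samples):
    """Pair each input window ending at time t with the label at t + lead.

    Training on (X[t], y[t + lead]) teaches the model to predict `lead`
    samples ahead; lead_samples = 0 recovers standard causal estimation.
    """
    if lead_samples == 0:
        return X, y
    return X[:-lead_samples], y[lead_samples:]

# Hypothetical 200 Hz sampling: a 200 ms anticipation horizon is 40 samples.
fs = 200
X = np.random.randn(1000, 186)   # e.g., flattened causal input windows
y = np.random.randn(1000)        # hip moment labels (Nm/kg)
X_a, y_a = shift_labels_for_anticipation(X, y, lead_samples=int(0.2 * fs))
print(X_a.shape, y_a.shape)  # → (960, 186) (960,)
```

Using a negative-direction shift (labels earlier than the window) would instead produce the delayed-estimate condition described above.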
- 
Background: Wearable technology is used by clinicians and researchers and plays a critical role in biomechanical assessments and rehabilitation. Objective: The purpose of this research is to validate a soft robotic stretch (SRS) sensor embedded in a compression knee brace (smart knee brace) against a motion capture system, focusing on knee joint kinematics. Methods: Sixteen participants donned the smart knee brace and completed three separate tasks: non-weight-bearing knee flexion/extension, bodyweight air squats, and gait trials. Adjusted R² for goodness of fit, root mean square error (RMSE), and mean absolute error (MAE) between the SRS sensor and motion capture kinematic data were assessed for all three tasks. Results: For knee flexion/extension: R² = 0.799, RMSE = 5.470, MAE = 4.560; for bodyweight air squats: R² = 0.957, RMSE = 8.127, MAE = 6.870; and for gait trials: R² = 0.565, RMSE = 9.190, MAE = 7.530 were observed. Conclusions: The smart knee brace demonstrated higher goodness of fit and accuracy during weight-bearing air squats, followed by non-weight-bearing knee flexion/extension, and lower goodness of fit and accuracy during gait, which can be attributed to the SRS sensor's position and orientation rather than the range of motion achieved in each task.
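The three agreement metrics above compare a sensor estimate against a motion-capture reference. A minimal sketch of computing them on toy knee-angle traces (note this computes plain R² rather than the adjusted R² the study reports; the signals and noise level are illustrative):

```python
import numpy as np

def agreement_metrics(measured, estimated):
    """RMSE, MAE, and R² between a reference signal and a sensor estimate."""
    err = estimated - measured
    rmse = np.sqrt(np.mean(err**2))
    mae = np.mean(np.abs(err))
    ss_res = np.sum(err**2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return rmse, mae, 1.0 - ss_res / ss_tot

# Toy knee-angle traces (degrees): mocap reference vs. a noisy brace estimate.
t = np.linspace(0, 2 * np.pi, 500)
mocap = 45 + 30 * np.sin(t)
brace = mocap + np.random.default_rng(1).normal(0, 5, t.size)
rmse, mae, r2 = agreement_metrics(mocap, brace)
print(f"RMSE={rmse:.2f} deg, MAE={mae:.2f} deg, R2={r2:.3f}")
```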
- 
Reinforcement learning (RL) has potential to provide innovative solutions to existing challenges in estimating joint moments in motion analysis, such as kinematic or electromyography (EMG) noise and unknown model parameters. Here, we explore the feasibility of RL to assist joint moment estimation for biomechanical applications. Forearm and hand kinematics and forearm EMGs from four muscles during free finger and wrist movement were collected from six healthy subjects. Using the proximal policy optimization approach, we trained two types of RL agents that estimated joint moment based on measured kinematics or measured EMGs, respectively. To quantify the performance of the trained RL agents, the estimated joint moment was used to drive a forward dynamic model to estimate kinematics, which were then compared with measured kinematics using the Pearson correlation coefficient. The results demonstrated that both trained RL agents can feasibly estimate joint moments for wrist and metacarpophalangeal (MCP) joint motion prediction. The correlation coefficients between predicted and measured kinematics, derived from the kinematics-driven agent and subject-specific EMG-driven agents, were 98% ± 1% and 94% ± 3% for the wrist, respectively, and 95% ± 2% and 84% ± 6% for the MCP joint, respectively. In addition, a biomechanically reasonable joint moment-angle-EMG relationship (i.e., dependence of joint moment on joint angle and EMG) was predicted using only 15 s of collected data. In conclusion, this study illustrates that an RL approach can be an alternative to conventional inverse dynamic analysis in human biomechanics studies and EMG-driven human-machine interfacing applications.
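The validation above scores agreement between forward-dynamics-predicted and measured kinematics as a Pearson correlation expressed in percent. A minimal sketch on toy wrist-angle traces; the signals, phase lag, and noise are illustrative assumptions:

```python
import numpy as np

def pearson_pct(measured, predicted):
    """Pearson correlation between two joint-angle traces, as a percentage."""
    return 100.0 * np.corrcoef(measured, predicted)[0, 1]

# Toy wrist-angle trace: a slightly lagged, noisy prediction vs. measurement.
t = np.linspace(0, 4 * np.pi, 400)
measured = 20 * np.sin(t)
predicted = 20 * np.sin(t - 0.1) + np.random.default_rng(2).normal(0, 1, t.size)
print(f"r = {pearson_pct(measured, predicted):.1f}%")
```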