Title: Upper Limb Joint Angle Estimation Using a Reduced Number of IMU Sensors and Recurrent Neural Networks
Accurate estimation of upper-limb joint angles is essential in biomechanics, rehabilitation, and wearable robotics. While inertial measurement units (IMUs) offer portability and flexibility, systems requiring multiple inertial sensors can be intrusive and complex to deploy. In contrast, optical motion capture (MoCap) systems provide precise tracking but are constrained to controlled laboratory environments. This study presents a deep learning-based approach for estimating shoulder and elbow joint angles using only three IMU sensors positioned on the chest and both wrists, validated against reference angles obtained from a MoCap system. The input data include Euler angles, accelerometer, and gyroscope signals, synchronized and segmented into sliding windows. Two recurrent neural network architectures, a Convolutional Neural Network with Long Short-Term Memory (CNN-LSTM) and a Bidirectional LSTM (BLSTM), were trained and evaluated under identical conditions. The CNN component extracted spatial features that enhanced the LSTM's sequential pattern learning and improved angle reconstruction. Both models achieved accurate estimation performance: the CNN-LSTM yielded lower Mean Absolute Error (MAE) on smooth trajectories, while the BLSTM produced smoother predictions but underestimated some peak movements, especially about the primary axes of rotation. These findings support the development of scalable, deep learning-based wearable systems and contribute to future applications in clinical assessment, sports performance analysis, and human motion research.
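As a rough illustration of the two architectures compared in this work, the sketch below shows minimal CNN-LSTM and BLSTM regressors over sliding windows of IMU features. The 27-channel input (three IMUs, each contributing Euler angle, accelerometer, and gyroscope axes), the 100-sample window, the hidden sizes, and the four output angles are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch, assuming 27 input channels, 100-sample windows, and 4 output angles.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, in_channels=27, hidden=64, n_angles=4):
        super().__init__()
        # 1D convolutions extract local spatial features across sensor channels
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        # LSTM models temporal dependencies over the window of CNN features
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_angles)

    def forward(self, x):                                   # x: (batch, window, channels)
        z = self.cnn(x.transpose(1, 2)).transpose(1, 2)     # (batch, window, 64)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])                        # angles from the last time step

class BLSTM(nn.Module):
    def __init__(self, in_channels=27, hidden=64, n_angles=4):
        super().__init__()
        self.lstm = nn.LSTM(in_channels, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_angles)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])

windows = torch.randn(8, 100, 27)                           # 8 windows of 100 samples
print(CNNLSTM()(windows).shape, BLSTM()(windows).shape)     # torch.Size([8, 4]) each
```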
Award ID(s):
2439345
PAR ID:
10661951
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
MDPI-Electronics
Date Published:
Journal Name:
Electronics
Volume:
14
Issue:
15
ISSN:
2079-9292
Page Range / eLocation ID:
3039
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Step length is a critical gait parameter that allows a quantitative assessment of gait asymmetry. Gait asymmetry can lead to potential health problems such as joint degeneration, impaired balance control, and gait inefficiency. Therefore, accurate step length estimation is essential to understand gait asymmetry and provide appropriate clinical interventions or gait training programs. The conventional method for step length measurement relies on foot-mounted inertial measurement units (IMUs). However, this may not be suitable for real-world applications due to sensor signal drift and the potential obtrusiveness of distal sensors. To overcome this challenge, we propose a deep convolutional neural network (CNN)-based step length estimation approach that uses only proximal wearable sensors (hip goniometer, trunk IMU, and thigh IMU) and generalizes to various walking speeds. To evaluate this approach, we utilized treadmill data collected from sixteen able-bodied subjects at different walking speeds. We tested our optimized model on overground walking data. Our CNN model estimated step length with an average mean absolute error of 2.89 ± 0.89 cm across all subjects and walking speeds. Since wearable sensors and CNN models are easily deployable in real time, our study findings can provide personalized real-time step length monitoring in wearable assistive devices and gait training programs.
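A minimal sketch of the kind of 1D-CNN step-length regressor described above, assuming a window of proximal sensor channels (hip goniometer plus trunk and thigh IMU signals). The channel count, window length, and layer sizes are illustrative placeholders, not the study's configuration.

```python
# Sketch: 1D-CNN regressor mapping a window of proximal sensor signals to step length.
# 13 channels and a 200-sample window are assumptions for illustration only.
import torch
import torch.nn as nn

class StepLengthCNN(nn.Module):
    def __init__(self, in_channels=13):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # pool over the time dimension
        )
        self.regressor = nn.Linear(64, 1)       # scalar step length estimate (cm)

    def forward(self, x):                       # x: (batch, channels, time)
        return self.regressor(self.features(x).squeeze(-1))

model = StepLengthCNN()
batch = torch.randn(4, 13, 200)                 # 4 gait windows of 200 samples
print(model(batch).shape)                       # torch.Size([4, 1])
```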
  2. The Unified Parkinson’s Disease Rating Scale (UPDRS) is used to recognize patients with Parkinson’s disease (PD) and rate its severity. The rating is crucial for disease progression monitoring and treatment adjustment. This study aims to advance the capabilities of PD management by developing an innovative framework that integrates deep learning with wearable sensor technology to enhance the precision of UPDRS assessments. We introduce a series of deep learning models to estimate UPDRS Part III scores, utilizing motion data from wearable sensors. Our approach leverages a novel Multi-shared-task Self-supervised Convolutional Neural Network–Long Short-Term Memory (CNN-LSTM) framework that processes raw gyroscope signals and their spectrogram representations. This technique aims to refine the estimation accuracy of PD severity during naturalistic human activities. Utilizing 526 min of data from 24 PD patients engaged in everyday activities, our methodology demonstrates a strong correlation of 0.89 between estimated and clinically assessed UPDRS-III scores. This model outperforms the benchmark set by single and multichannel CNN, LSTM, and CNN-LSTM models and establishes a new standard in UPDRS-III score estimation for free-body movements compared to recent state-of-the-art methods. These results signify a substantial step forward in bioengineering applications for PD monitoring, providing a robust framework for reliable and continuous assessment of PD symptoms in daily living settings. 
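The sketch below illustrates, under assumed parameters, a simplified two-branch regressor in the spirit of the framework described above: a 1D CNN-LSTM over a raw gyroscope channel fused with a small 2D CNN over its spectrogram to estimate a UPDRS-III score. The single-channel input, STFT settings, and layer sizes are assumptions, and the multi-shared-task self-supervised training is not reproduced here.

```python
# Sketch: fuse a raw-signal CNN-LSTM branch with a spectrogram CNN branch.
import torch
import torch.nn as nn

class DualBranchUPDRS(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.raw_cnn = nn.Sequential(nn.Conv1d(1, 16, 5, padding=2), nn.ReLU())
        self.raw_lstm = nn.LSTM(16, hidden, batch_first=True)
        self.spec_cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(hidden + 8 * 4 * 4, 1)   # UPDRS-III score estimate

    def forward(self, gyro):                           # gyro: (batch, time)
        z = self.raw_cnn(gyro.unsqueeze(1)).transpose(1, 2)
        _, (h, _) = self.raw_lstm(z)
        spec = torch.stft(gyro, n_fft=64, hop_length=32,
                          window=torch.hann_window(64), return_complex=True).abs()
        s = self.spec_cnn(spec.unsqueeze(1)).flatten(1)
        return self.head(torch.cat([h[-1], s], dim=1))

print(DualBranchUPDRS()(torch.randn(2, 1024)).shape)   # torch.Size([2, 1])
```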
  3. This work presents an integrated solution for head orientation estimation, a critical component of virtual and augmented reality systems. The proposed solution builds upon measurements from inertial sensors and a magnetometer mounted on an instrumented helmet, and an orientation estimation algorithm is developed to mitigate the effect of bias introduced by noise in the gyroscope signal. Convolutional Neural Network (CNN) techniques are introduced to develop a dynamic orientation estimation algorithm with a structure motivated by complementary filters, trained on data collected to represent a wide range of head motion profiles. The proposed orientation estimation method is evaluated experimentally and compared to both learning-based and non-learning-based orientation estimation algorithms found in the literature for comparable applications. Test results support the advantage of the proposed CNN-based solution, particularly for motion profiles with the high acceleration disturbance characteristic of head motion.
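As an assumed, simplified illustration of the complementary-filter-motivated idea: a fast path integrates gyroscope rates, while a small CNN over accelerometer and magnetometer windows supplies a slow drift correction. The window length, blending gain, and layer sizes are placeholders, not the paper's design.

```python
# Sketch: complementary-filter-style fusion of integrated gyro rates and a CNN correction.
import torch
import torch.nn as nn

class CNNCorrection(nn.Module):
    """Predicts a roll/pitch/yaw correction from a window of accel+mag samples."""
    def __init__(self, in_channels=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 3),
        )

    def forward(self, x):                           # x: (batch, 6, window)
        return self.net(x)

def fuse(euler, gyro_rate, correction, dt=0.01, alpha=0.98):
    """Complementary blend: trust integrated gyro fast, CNN correction slow."""
    return alpha * (euler + gyro_rate * dt) + (1.0 - alpha) * correction

cnn = CNNCorrection()
euler = torch.zeros(1, 3)                           # current roll/pitch/yaw estimate
window = torch.randn(1, 6, 50)                      # latest accel+mag window (50 samples)
euler = fuse(euler, torch.randn(1, 3), cnn(window))
print(euler.shape)                                  # torch.Size([1, 3])
```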
  4. Background Unified Parkinson's Disease Rating Scale-Part III (UPDRS-III) is part of the standard clinical examination performed to track the severity of Parkinson's disease (PD) motor complications. Wearable technologies could be used to reduce the need for on-site clinical examinations of people with Parkinson's disease (PwP) and provide a reliable and continuous estimation of the severity of PD at home. The reported estimation can be used to adjust the dose and interval of PD medications. Methods We developed a novel algorithm for unobtrusive and continuous UPDRS-III estimation at home using two wearable inertial sensors mounted on the wrist and ankle. We used an ensemble of three deep-learning models to detect UPDRS-III-related patterns from a combination of hand-crafted features, raw temporal signals, and their time–frequency representation. Specifically, we used a dual-channel Long Short-Term Memory (LSTM) network for hand-crafted features, a 1D Convolutional Neural Network (CNN)-LSTM for raw signals, and a 2D CNN-LSTM for time–frequency data. We utilized transfer learning from activity recognition data and proposed a two-stage training procedure for the CNN-LSTM networks to cope with the limited amount of data. Results The algorithm was evaluated on gyroscope data from 24 PwP as they performed different daily living activities. The estimated UPDRS-III scores had a correlation of 0.79 (p < 0.0001) and a mean absolute error of 5.95 with the clinical examination scores, without requiring the patients to perform any specific tasks. Conclusion Our analysis demonstrates the potential of our algorithm for estimating PD severity scores unobtrusively at home. Such an algorithm could provide the required motor-complication measurements without unnecessary clinical visits and help the treating physician provide effective management of the disease.
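A sketch of the ensemble idea under simplified, assumed branch architectures: three models score the same movement segment from different input views (hand-crafted features, raw signals, and a time-frequency image), and their predictions are averaged. The real branches are dual-channel LSTM, 1D CNN-LSTM, and 2D CNN-LSTM networks trained with transfer learning, which are not reproduced here.

```python
# Sketch: average the score predictions of three stand-in branch models.
import torch
import torch.nn as nn

class FeatureLSTM(nn.Module):            # stand-in for the hand-crafted-feature branch
    def __init__(self, n_feat=40, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):                # x: (batch, steps, features)
        _, (h, _) = self.lstm(x)
        return self.out(h[-1])

class Raw1DCNNLSTM(nn.Module):           # stand-in for the raw-signal branch
    def __init__(self, channels=3, hidden=32):
        super().__init__()
        self.cnn = nn.Conv1d(channels, 16, 5, padding=2)
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):                # x: (batch, channels, time)
        z = torch.relu(self.cnn(x)).transpose(1, 2)
        _, (h, _) = self.lstm(z)
        return self.out(h[-1])

class Spec2DCNN(nn.Module):              # stand-in for the time-frequency branch
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
        )
    def forward(self, x):                # x: (batch, 1, freq, time)
        return self.net(x)

def ensemble_score(feat_seq, raw, spec, branches):
    preds = [branches[0](feat_seq), branches[1](raw), branches[2](spec)]
    return torch.stack(preds).mean(dim=0)        # averaged UPDRS-III estimate

branches = (FeatureLSTM(), Raw1DCNNLSTM(), Spec2DCNN())
score = ensemble_score(torch.randn(2, 10, 40), torch.randn(2, 3, 500),
                       torch.randn(2, 1, 33, 33), branches)
print(score.shape)                               # torch.Size([2, 1])
```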
  5. The development of wearable technology, which enables motion tracking analysis for human movement outside the laboratory, can improve awareness of personal health and performance. This study used a wearable smart sock prototype to track foot–ankle kinematics during gait. Multivariable linear regression and two deep learning models, long short-term memory (LSTM) and convolutional neural networks, were trained to estimate the joint angles in the sagittal and frontal planes measured by an optical motion capture system. Participant-specific models were established for ten healthy subjects walking on a treadmill. The prototype was tested at various walking speeds to assess its ability to track movements across multiple speeds and to generalize models for estimating joint angles in the sagittal and frontal planes. LSTM outperformed the other models, with lower mean absolute error (MAE), lower root mean squared error (RMSE), and higher R-squared values. The average MAE was less than 1.138° and 0.939° in the sagittal and frontal planes, respectively, when training models for each speed, and 2.15° and 1.14° when trained and evaluated across all speeds. These results indicate that the wearable smart socks can generalize foot–ankle kinematics estimation over various walking speeds with relatively low error and could consequently be used to measure gait parameters without the need for a lab-constrained motion capture system.
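For reference, a small sketch of the evaluation metrics reported above (MAE, RMSE, and R-squared) applied to estimated versus motion-capture joint angle trajectories; the signals below are synthetic placeholders, not study data.

```python
# Sketch: MAE, RMSE, and R-squared between a reference and an estimated angle trajectory.
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Placeholder angle trajectories (degrees) for one gait trial
mocap = np.sin(np.linspace(0, 4 * np.pi, 200)) * 20.0
estimate = mocap + np.random.normal(0, 1.0, size=mocap.shape)
print(mae(mocap, estimate), rmse(mocap, estimate), r_squared(mocap, estimate))
```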