Title: Bilaterally Mirrored Movements Improve the Accuracy and Precision of Training Data for Supervised Learning of Neural or Myoelectric Prosthetic Control
Intuitive control of prostheses relies on training algorithms to correlate biological recordings to motor intent. The quality of the training dataset is critical to run-time performance, but it is difficult to label hand kinematics accurately after the hand has been amputated. We quantified the accuracy and precision of labeling hand kinematics for two different approaches: 1) assuming a participant is perfectly mimicking predetermined motions of a prosthesis (mimicked training), and 2) assuming a participant is perfectly mirroring their contralateral hand during identical bilateral movements (mirrored training). We compared these approaches in non-amputee individuals, using an infrared camera to track eight different joint angles of the hands in real-time. Aggregate data revealed that mimicked training does not account for biomechanical coupling or temporal changes in hand posture. Mirrored training was significantly more accurate and precise at labeling hand kinematics. However, when training a modified Kalman filter to estimate motor intent, the mimicked and mirrored training approaches were not significantly different. The results suggest that the mirrored training approach creates a more faithful but more complex dataset. Advanced algorithms, more capable of learning the complex mirrored training dataset, may yield better run-time prosthetic control.
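A minimal sketch (not the authors' code) of how labeling accuracy and precision could be quantified for the two approaches: the camera-tracked joint angles of the trained hand serve as the reference, mimicked labels come from the predetermined prosthesis cue, and mirrored labels come from the tracked contralateral hand. Array shapes, noise levels, and the error metrics below are illustrative assumptions.

```python
# Hedged illustration of label-quality metrics for mimicked vs. mirrored training.
# Each array is (T samples x 8 joint angles), as if captured by the motion-tracking camera.
import numpy as np

rng = np.random.default_rng(0)
T, J = 1000, 8
true_angles = rng.uniform(0.0, 90.0, size=(T, J))            # tracked angles of the trained hand (reference)
cue_angles = np.round(true_angles / 30.0) * 30.0             # idealized prosthesis cue (mimicked labels, assumed form)
mirror_angles = true_angles + rng.normal(0.0, 2.0, (T, J))   # tracked contralateral hand (mirrored labels, assumed noise)

def label_quality(labels, reference):
    """Assumed metrics: accuracy = mean absolute labeling error; precision = std of the error."""
    err = labels - reference
    return np.abs(err).mean(axis=0), err.std(axis=0)

mim_acc, mim_prec = label_quality(cue_angles, true_angles)
mir_acc, mir_prec = label_quality(mirror_angles, true_angles)
print("mimicked MAE per joint (deg):", mim_acc.round(1))
print("mirrored MAE per joint (deg):", mir_acc.round(1))
```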
Award ID(s):
1901492 1901236
PAR ID:
10176668
Author(s) / Creator(s):
Date Published:
Journal Name:
Annual International Conference of the IEEE Engineering in Medicine and Biology Society
ISSN:
2375-7477
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Working towards improved neuromyoelectric control of dexterous prosthetic hands, we explored how differences in training paradigms affect the subsequent online performance of two different motor-decode algorithms. Participants included two intact subjects and one participant who had undergone a recent transradial amputation after complex regional pain syndrome (CRPS) and multi-year disuse of the affected hand. During algorithm training sessions, participants actively mimicked hand movements appearing on a computer monitor. We varied both the duration of the hold-time (0.1 s or 5 s) at the end-point of each of six different digit and wrist movements, and the order in which the training movements were presented (random or sequential). We quantified the impact of these variations on two different motor-decode algorithms, both having proportional, six-degree-of-freedom (DOF) control: a modified Kalman filter (MKF) previously reported by this group, and a new approach, a convolutional neural network (CNN). Results showed that increasing the hold-time in the training set improved run-time performance. By contrast, presenting training movements in either random or sequential order had a variable and relatively modest effect on performance. The relative performance of the two decode algorithms varied according to the performance metric. This work represents the first-ever amputee use of a CNN for real-time, proportional six-DOF control of a prosthetic hand. Also novel was the testing of implanted high-channel-count devices for neuromyoelectric control shortly after amputation, following CRPS and long-term hand disuse. This work identifies key factors in the training of decode algorithms that improve their subsequent run-time performance.
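An illustrative sketch (not the study's training software) of how the two training-cue parameters examined above, hold-time at each movement end-point and presentation order, might be varied when generating a training session. The movement names, repetition count, and ramp duration are assumptions.

```python
# Build a cue schedule for algorithm training with a configurable hold-time and order.
import random

MOVEMENTS = ["thumb_flex", "index_flex", "middle_flex",
             "ring_flex", "little_flex", "wrist_flex"]   # assumed 6-DOF movement set

def make_cue_schedule(hold_time_s, order, reps=3, ramp_s=1.0, seed=0):
    """Return a list of (movement, ramp_s, hold_s) cues for one training session."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(reps):
        movements = MOVEMENTS[:]
        if order == "random":
            rng.shuffle(movements)          # random presentation order
        for m in movements:                 # sequential order keeps the list as-is
            schedule.append((m, ramp_s, hold_time_s))
    return schedule

short_random = make_cue_schedule(hold_time_s=0.1, order="random")
long_sequential = make_cue_schedule(hold_time_s=5.0, order="sequential")
print(len(short_random), "cues; first cue:", short_random[0])
```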
  2. Children born with congenital upper limb absence exhibit consistent and distinguishable levels of biological control over their affected muscles, assessed through surface electromyography (sEMG). This represents a significant advancement in determining how these children might utilize sEMG-controlled dexterous prostheses. Despite this potential, the efficacy of employing conventional sEMG classification techniques for children born with upper limb absence is uncertain, as these techniques have been optimized for adults with acquired amputations. Tuning sEMG classification algorithms for this population is crucial for facilitating the successful translation of dexterous prostheses. To support this effort, we collected sEMG data from a cohort of N = 9 children with unilateral congenital below-elbow deficiency as they attempted 11 hand movements, including rest. Five classification algorithms were used to decode motor intent, tuned with features from the time, frequency, and time–frequency domains. We derived the congenital feature set (CFS) from the participant-specific tuned feature sets, which exhibited generalizability across our cohort. The CFS offline classification accuracy across participants was 73.8% ± 13.8% for the 11 hand movements and increased to 96.5% ± 6.6% when focusing on a reduced set of five movements. These results highlight the potential for individuals born with upper limb absence to effectively control dexterous prostheses through sEMG interfaces.
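A hedged sketch of a conventional sEMG classification pipeline of the kind tuned in this work: windowed time-domain (Hudgins-style) features feeding a standard classifier. The congenital feature set (CFS) itself is not reproduced here; the thresholds, window sizes, classifier choice, and synthetic data below are assumptions.

```python
# Extract common time-domain sEMG features per window and train a linear classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def hudgins_features(window, zc_thresh=0.01, ssc_thresh=0.01):
    """Return MAV, waveform length, zero crossings, and slope-sign changes per channel."""
    mav = np.abs(window).mean(axis=0)
    wl = np.abs(np.diff(window, axis=0)).sum(axis=0)
    zc = ((window[:-1] * window[1:] < 0) &
          (np.abs(np.diff(window, axis=0)) > zc_thresh)).sum(axis=0)
    d = np.diff(window, axis=0)
    ssc = ((d[:-1] * d[1:] < 0) & (np.abs(d[1:]) > ssc_thresh)).sum(axis=0)
    return np.concatenate([mav, wl, zc, ssc])

# Synthetic stand-in data: 200 windows x 250 samples x 8 sEMG channels,
# labeled with 11 movement classes (10 movements plus rest).
rng = np.random.default_rng(1)
emg_windows = rng.normal(size=(200, 250, 8))
labels = rng.integers(0, 11, size=200)

X = np.array([hudgins_features(w) for w in emg_windows])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy on synthetic data:", clf.score(X, labels))
```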
  3. Previous literature shows that deep learning is an effective tool for decoding motor intent from neural signals obtained from different parts of the nervous system. However, deep neural networks are often computationally complex and not feasible to run in real time. Here we investigate the advantages and disadvantages of different approaches to improving the efficiency of deep learning-based motor decoding and informing its future real-time implementation. Our data are recorded from an amputee's residual peripheral nerves. While the primary analysis is offline, the nerve data are cut using a sliding window to create a “pseudo-online” dataset that resembles the conditions of a real-time paradigm. First, a comprehensive collection of feature extraction techniques is applied to reduce the input data dimensionality, which later helps substantially lower the motor decoder's complexity, making it feasible for translation to a real-time paradigm. Next, we investigate two different strategies for deploying deep learning models: a one-step (1S) approach when large input datasets are available, and a two-step (2S) approach when input data are limited. This research predicts five individual finger movements and four combinations of the fingers. The 1S approach, using a recurrent neural network (RNN) to concurrently predict all fingers' trajectories, generally gives better prediction results than all the machine learning algorithms that do the same task. This result reaffirms that deep learning is more advantageous than classic machine learning methods for handling a large dataset. However, when training on a smaller input dataset in the 2S approach, which includes a classification stage to identify active fingers before predicting their trajectories, machine learning techniques offer a simpler implementation while ensuring decoding outcomes comparably good to the deep learning ones. In the classification step, both machine learning and deep learning models achieve an accuracy and F1 score of 0.99. Thanks to the classification step, in the regression step both types of models achieve mean squared error (MSE) and variance accounted for (VAF) scores comparable to those of the 1S approach. Our study outlines these trade-offs to inform the future implementation of real-time, low-latency, high-accuracy deep learning-based motor decoders for clinical applications.
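A minimal sketch of the “pseudo-online” windowing idea described above: a long recording is cut into overlapping sliding windows so that offline analysis mimics the update rate of a real-time decoder. The window length, step size, channel count, and sampling rate below are assumptions, not the paper's values.

```python
# Segment a continuous recording into overlapping windows for pseudo-online analysis.
import numpy as np

def sliding_windows(signal, window_len, step):
    """signal: (samples, channels) -> (n_windows, window_len, channels)."""
    n = 1 + (len(signal) - window_len) // step
    return np.stack([signal[i * step : i * step + window_len] for i in range(n)])

fs = 1000                                    # assumed sampling rate, Hz
nerve_data = np.random.randn(60 * fs, 16)    # stand-in for 60 s of 16-channel nerve data
windows = sliding_windows(nerve_data, window_len=int(0.2 * fs), step=int(0.05 * fs))
print(windows.shape)                         # one feature vector / prediction every 50 ms, as in real time
```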
  4. Significance: The performance of traditional approaches to decoding movement intent from electromyograms (EMGs) and other biological signals commonly degrades over time. Furthermore, conventional algorithms for training neural network-based decoders may not perform well outside the domain of the state transitions observed during training. The work presented in this paper mitigates both of these problems, resulting in an approach that has the potential to substantially improve the quality of life of people with limb loss. Objective: This paper presents and evaluates the performance of four methods for decoding volitional movement intent from intramuscular EMG signals. Methods: The decoders are trained using the dataset aggregation (DAgger) algorithm, in which the training data set is augmented during each training iteration based on the decoded estimates from previous iterations. Four competing decoding methods were developed: polynomial Kalman filters (KFs), multilayer perceptron (MLP) networks, convolutional neural networks (CNNs), and long short-term memory (LSTM) networks. The performance of the four decoding methods was evaluated using EMG data sets recorded from two human volunteers with transradial amputation. Short-term analyses, in which the training and cross-validation data came from the same data set, and long-term analyses, in which training and testing were done on different data sets, were performed. Results: Short-term analyses of the decoders demonstrated that the CNN and MLP decoders performed significantly better than the KF and LSTM decoders, showing an improvement of up to 60% in the normalized mean-square decoding error in cross-validation tests. Long-term analyses indicated that the CNN, MLP, and LSTM decoders performed significantly better than the KF-based decoder for most of the analyzed temporal separations (0 to 150 days) between the acquisition of the training and testing data sets. Conclusion: The short-term and long-term performance of the MLP- and CNN-based decoders trained with DAgger demonstrates their potential to provide more accurate and naturalistic control of prosthetic hands than alternative approaches.
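A schematic sketch of dataset-aggregation (DAgger-style) decoder training as described above: at each iteration the current decoder's own estimates help define the inputs added to the training set, which are labeled with the reference movement intent, aggregated, and used to retrain. The data, the input construction, and the regressor below are placeholders, not the paper's decoders.

```python
# Iterative dataset aggregation: previous decoder outputs feed back into the inputs,
# the aggregated set keeps growing, and the decoder is retrained each round.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
emg_feats = rng.normal(size=(2000, 32))      # stand-in intramuscular EMG features
intent = rng.normal(size=(2000, 6))          # reference 6-DOF movement intent (labels)

def make_inputs(feats, prev_estimate):
    """Assumed input form: current EMG features plus the previously decoded state."""
    return np.hstack([feats, prev_estimate])

decoder = MLPRegressor(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
agg_X, agg_y = [], []
estimate = np.zeros_like(intent)             # iteration 0: no previous estimates yet
for iteration in range(3):
    X_iter = make_inputs(emg_feats, estimate)          # states seen under the current decoder
    agg_X.append(X_iter)
    agg_y.append(intent)                               # label them with the reference intent
    decoder.fit(np.vstack(agg_X), np.vstack(agg_y))    # retrain on the aggregated data set
    estimate = decoder.predict(X_iter)                 # these estimates shape the next iteration
```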
  5. Motor impairments resulting from neurological disorders, such as strokes or spinal cord injuries, often impair hand and finger mobility, restricting a person’s ability to grasp and perform fine motor tasks. Brain plasticity refers to the inherent capability of the central nervous system to functionally and structurally reorganize itself in response to stimulation, which underpins rehabilitation from brain injuries or strokes. Linking voluntary cortical activity with corresponding motor execution has been identified as effective in promoting adaptive plasticity. This study introduces NeuroFlex, a motion-intent-controlled soft robotic glove for hand rehabilitation. NeuroFlex utilizes a transformer-based deep learning (DL) architecture to decode motion intent from motor imagery (MI) EEG data and translate it into control inputs for the assistive glove. The glove’s soft, lightweight, and flexible design enables users to perform rehabilitation exercises involving fist formation and grasping movements, aligning with natural hand function for fine motor practice. The results show that the accuracy of decoding the intent to form a fist from MI EEG can reach up to 85.3%, with an average AUC of 0.88. NeuroFlex demonstrates the feasibility of detecting and assisting a patient’s attempted movements through thought alone via a non-intrusive brain–computer interface (BCI). This EEG-based soft glove aims to enhance the effectiveness and user experience of rehabilitation protocols, offering the possibility of extending therapeutic opportunities outside clinical settings.
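A hedged sketch (not NeuroFlex's actual architecture) of a small transformer encoder classifying windows of motor-imagery EEG into glove commands such as fist versus rest; the channel count, window length, and model dimensions are assumed.

```python
# Minimal transformer encoder over EEG time windows, producing per-window class logits.
import torch
import torch.nn as nn

class EEGTransformer(nn.Module):
    def __init__(self, n_channels=32, d_model=64, n_classes=2):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)        # per-time-step channel embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        z = self.encoder(self.embed(x))   # (batch, time, d_model)
        return self.head(z.mean(dim=1))   # average over time, then classify

model = EEGTransformer()
eeg_window = torch.randn(8, 250, 32)      # 8 stand-in trials x 250 samples x 32 channels
logits = model(eeg_window)                # logits could gate the glove's assist commands
print(logits.shape)                       # torch.Size([8, 2])
```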