Search for: All records

Award ID contains: 1856441


  1. Objective: In this study, we aimed to develop a novel electromyography (EMG)-based neural machine interface (NMI), called the Neural Network-Musculoskeletal hybrid Model (N2M2), to decode continuous joint angles. Our approach combines the concepts of machine learning and musculoskeletal modeling. Methods: We compared our novel design with a musculoskeletal model (MM) and 2 continuous EMG decoders based on artificial neural networks (ANNs): multilayer perceptrons (MLPs) and nonlinear autoregressive neural networks with exogenous inputs (NARX networks). EMG and joint kinematics data were collected from 10 non-disabled subjects and 1 transradial amputee. We quantified offline performance across 3 conditions (i.e., varied arm postures, shifted electrode locations, and noise-contaminated EMG signals) and online performance in a virtual postural matching task. Finally, we implemented the N2M2 to operate a prosthetic hand and tested functional task performance. Results: The N2M2 made more accurate predictions than the MLP in all postures and electrode locations (p < 0.003). For estimated metacarpophalangeal (MCP) joint angles, the N2M2 was less sensitive to noisy EMG signals than the MM or NARX network with respect to error (p < 0.032), as well as the NARX network with respect to correlation (p = 0.007). Additionally, the N2M2 had better online task performance than the NARX network (p ≤ 0.030). Conclusion: Overall, we found that combining the concepts of machine learning and musculoskeletal modeling resulted in a more robust joint kinematics decoder than either concept alone. Significance: The outcome of this study may result in a novel, highly reliable controller for powered prosthetic hands.
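    A minimal sketch of the hybrid idea described above (not the authors' N2M2): a small neural network maps an EMG feature vector to muscle activations, and a simplified quasi-static musculoskeletal stage converts those activations into a joint angle. The network weights, muscle parameters, and feature dimension below are illustrative placeholders, and Python/NumPy is assumed.

        # Hybrid EMG decoder sketch: ANN (EMG features -> muscle activations)
        # followed by a simplified musculoskeletal stage (activations -> angle).
        import numpy as np

        rng = np.random.default_rng(0)

        def mlp_activations(emg_features, W1, b1, W2, b2):
            """Small MLP mapping an EMG feature vector to a flexor/extensor activation pair."""
            h = np.tanh(emg_features @ W1 + b1)          # hidden layer
            return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # activations in (0, 1)

        def joint_angle_from_activations(a, f_max=(300.0, 250.0), r=(0.01, -0.01), k_passive=2.0):
            """Quasi-static torque balance: active flexor/extensor moments vs. passive stiffness."""
            active_moment = a[0] * f_max[0] * r[0] + a[1] * f_max[1] * r[1]
            return np.degrees(active_moment / k_passive)  # equilibrium joint angle (deg)

        # Placeholder weights stand in for a trained network.
        n_feat, n_hidden = 8, 16
        W1, b1 = 0.1 * rng.normal(size=(n_feat, n_hidden)), np.zeros(n_hidden)
        W2, b2 = 0.1 * rng.normal(size=(n_hidden, 2)), np.zeros(2)

        emg_window = rng.random(n_feat)  # e.g., mean-absolute-value features per channel
        a = mlp_activations(emg_window, W1, b1, W2, b2)
        print("activations:", a, "predicted angle (deg):", joint_angle_from_activations(a))

    In the actual system the musculoskeletal stage would be dynamic and calibrated to the subject; the sketch only illustrates the two-stage structure that keeps the decoder physiologically constrained.
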
  2. Reinforcement learning (RL) has the potential to provide innovative solutions to existing challenges in estimating joint moments in motion analysis, such as kinematic or electromyography (EMG) noise and unknown model parameters. Here, we explore the feasibility of RL to assist joint moment estimation for biomechanical applications. Forearm and hand kinematics and forearm EMGs from four muscles during free finger and wrist movement were collected from six healthy subjects. Using the proximal policy optimization (PPO) approach, we trained two types of RL agents that estimated joint moments based on measured kinematics or measured EMGs, respectively. To quantify the performance of the trained RL agents, the estimated joint moments were used to drive a forward dynamic model for estimating kinematics, which was then compared with measured kinematics using the Pearson correlation coefficient. The results demonstrated that both trained RL agents can feasibly estimate joint moments for wrist and metacarpophalangeal (MCP) joint motion prediction. The correlation coefficients between predicted and measured kinematics, derived from the kinematics-driven agent and subject-specific EMG-driven agents, were 98% ± 1% and 94% ± 3% for the wrist, respectively, and 95% ± 2% and 84% ± 6% for the MCP joint, respectively. In addition, a biomechanically reasonable joint moment-angle-EMG relationship (i.e., dependence of joint moment on joint angle and EMG) was predicted using only 15 s of collected data. In conclusion, this study illustrates that an RL approach can be an alternative to conventional inverse dynamic analysis in human biomechanics studies and EMG-driven human-machine interfacing applications.
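    A minimal sketch of the evaluation pipeline described above (not the authors' implementation): an estimated joint-moment trajectory, standing in for the output of a trained PPO agent, drives a single-degree-of-freedom forward dynamic model, and the resulting kinematics are compared with a measured trace via the Pearson correlation coefficient. The inertia, damping, and stiffness values and the synthetic signals are placeholders.

        # Forward-dynamics check of an estimated joint moment, scored with Pearson's r.
        import numpy as np
        from scipy.stats import pearsonr

        def forward_dynamics(tau, dt=0.001, I=0.002, b=0.01, k=0.5):
            """Integrate I*theta_ddot = tau - b*theta_dot - k*theta with explicit Euler."""
            theta, omega = 0.0, 0.0
            trajectory = np.empty_like(tau)
            for i, torque in enumerate(tau):
                alpha = (torque - b * omega - k * theta) / I
                omega += alpha * dt
                theta += omega * dt
                trajectory[i] = theta
            return trajectory

        # Synthetic stand-ins; real use would substitute recorded kinematics and the
        # moment trajectory produced by the trained RL agent.
        t = np.arange(0.0, 5.0, 0.001)
        measured_angle = 0.3 * np.sin(2 * np.pi * 1.0 * t)
        estimated_moment = 0.5 * np.sin(2 * np.pi * 1.0 * t + 0.1)

        predicted_angle = forward_dynamics(estimated_moment)
        r, _ = pearsonr(predicted_angle, measured_angle)
        print(f"Pearson r between predicted and measured kinematics: {r:.2f}")
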
  3. Computer vision has shown promising potential in wearable robotics applications (e.g., human grasping target prediction and context understanding). However, in practice, the performance of computer vision algorithms is challenged by insufficient or biased training data, observation noise, cluttered backgrounds, etc. By leveraging Bayesian deep learning (BDL), we have developed a novel, reliable vision-based framework to assist upper limb prosthesis grasping during arm reaching. This framework can measure different types of uncertainty from the model and the data for grasping target recognition in realistic and challenging scenarios. A probability calibration network was developed to fuse the uncertainty measures into one calibrated probability for online decision making. We formulated the problem as the prediction of the grasping target during arm reaching. Specifically, we developed a 3-D simulation platform to simulate and analyze the performance of vision algorithms under several common challenging scenarios encountered in practice. In addition, we integrated our approach into a shared control framework for a prosthetic arm and demonstrated its potential to assist human participants with fluent target reaching and grasping tasks.
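    A minimal sketch of uncertainty-aware grasp-target recognition under stated assumptions: the paper's BDL framework and probability calibration network are not reproduced here, so Monte Carlo dropout stands in as one common way to obtain a predictive distribution with a model-uncertainty estimate, and a fixed confidence threshold stands in for the calibrated online decision rule. The architecture, feature dimension, and threshold are illustrative, and PyTorch is assumed.

        # MC-dropout classifier over candidate grasp targets with a simple commit/defer rule.
        import torch
        import torch.nn as nn

        class GraspTargetNet(nn.Module):
            """Toy classifier over candidate grasp targets; dropout enables MC sampling."""
            def __init__(self, n_features=128, n_targets=5):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p=0.3),
                    nn.Linear(64, n_targets),
                )

            def forward(self, x):
                return self.net(x)

        @torch.no_grad()
        def mc_dropout_predict(model, x, n_samples=30):
            """Keep dropout active at inference and average softmax over stochastic passes."""
            model.train()  # leaves dropout on during the forward passes
            probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
            return probs.mean(dim=0), probs.var(dim=0)  # predictive mean, epistemic spread

        model = GraspTargetNet()
        features = torch.randn(1, 128)                  # placeholder visual features
        mean_probs, epistemic = mc_dropout_predict(model, features)
        confidence, target = mean_probs.max(dim=-1)
        if confidence.item() > 0.6:                     # placeholder decision threshold
            print(f"commit to grasp target {target.item()} ({confidence.item():.2f})")
        else:
            print("uncertain: defer / keep observing during the reach")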