Search for: All records

Creators/Authors contains: "Warren, David J"


  1. Following tetraplegia, independence in completing essential daily tasks, such as opening doors and eating, declines significantly. Assistive robotic manipulators (ARMs) could restore independence, but input devices for these manipulators typically require functional use of the hands. We created and validated a hands-free multimodal input system for controlling an ARM in virtual reality using combinations of a gyroscope, eye-tracking, and heterologous surface electromyography (sEMG). These input modalities are mapped to ARM functions based on the user's preferences and to maximize the utility of their residual volitional capabilities following tetraplegia. The two participants with tetraplegia in this study preferred the control mapping with sEMG button functions and disliked winking commands. Non-disabled participants were more varied in their preferences and performance, further suggesting that customizability is an advantageous component of the control system. Replacing buttons from a traditional handheld controller with sEMG did not substantively reduce performance. The system provided all participants with adequate control to complete functional tasks in virtual reality such as opening door handles, turning stove dials, eating, and drinking, all of which enable independence and improved quality of life for these individuals.
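A minimal sketch of how such a user-configurable, hands-free control mapping could be organized (Python). The modality names, ARM functions, and dispatch logic below are illustrative assumptions, not the study's actual implementation:

```python
# Hypothetical sketch of a user-configurable mapping from hands-free input
# modalities to assistive-robotic-manipulator (ARM) functions. Modality and
# function names are illustrative only.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class InputEvent:
    modality: str   # e.g. "gyro_tilt", "gaze_dwell", "semg_burst_1"
    value: float    # normalized signal magnitude


def default_mapping(arm) -> Dict[str, Callable[[float], None]]:
    """One possible mapping; each user could swap entries to match
    their residual volitional capabilities and preferences."""
    return {
        "gyro_tilt":    arm.translate_end_effector,  # head tilt -> translation
        "gaze_dwell":   arm.select_target,           # eye dwell -> target selection
        "semg_burst_1": arm.close_gripper,           # sEMG "button" -> grasp
        "semg_burst_2": arm.open_gripper,            # sEMG "button" -> release
    }


def dispatch(event: InputEvent, mapping: Dict[str, Callable[[float], None]]) -> None:
    """Route an input event to its assigned ARM function, if one is mapped."""
    action = mapping.get(event.modality)
    if action is not None:
        action(event.value)
```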
  5. Significance: The performance of traditional approaches to decoding movement intent from electromyograms (EMGs) and other biological signals commonly degrades over time. Furthermore, conventional algorithms for training neural-network-based decoders may not perform well outside the domain of the state transitions observed during training. The work presented in this paper mitigates both of these problems, resulting in an approach that has the potential to substantially improve the quality of life of people with limb loss. Objective: This paper presents and evaluates the performance of four methods for decoding volitional movement intent from intramuscular EMG signals. Methods: The decoders are trained using the dataset aggregation (DAgger) algorithm, in which the training data set is augmented during each training iteration based on the decoded estimates from previous iterations. Four competing decoding methods were developed: polynomial Kalman filters (KFs), multilayer perceptron (MLP) networks, convolutional neural networks (CNNs), and long short-term memory (LSTM) networks. Their performance was evaluated using EMG data sets recorded from two human volunteers with transradial amputation. Short-term analyses, in which the training and cross-validation data came from the same data set, and long-term analyses, in which training and testing were done on different data sets, were performed. Results: Short-term analyses demonstrated that the CNN and MLP decoders performed significantly better than the KF and LSTM decoders, showing an improvement of up to 60% in the normalized mean-square decoding error in cross-validation tests. Long-term analyses indicated that the CNN, MLP, and LSTM decoders performed significantly better than the KF-based decoder at most of the analyzed temporal separations (0 to 150 days) between the acquisition of the training and testing data sets. Conclusion: The short-term and long-term performance of the MLP- and CNN-based decoders trained with DAgger demonstrates their potential to provide more accurate and naturalistic control of prosthetic hands than alternative approaches.
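A minimal sketch of the DAgger-style training loop described above, assuming an MLP decoder whose input includes its own previous output. The function names, feature shapes, and the use of scikit-learn's MLPRegressor are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical sketch of DAgger-style decoder training: the training set is
# augmented each iteration with the states visited by the current decoder,
# labeled with the intended (target) kinematics.

import numpy as np
from sklearn.neural_network import MLPRegressor


def dagger_train(emg_features, target_kinematics, n_iters=5):
    """emg_features: (T, F) array; target_kinematics: (T, D) array of intended movements."""
    n_dims = target_kinematics.shape[1]
    # Initial states pair the EMG features with the *target* previous output
    # (behavior-cloning start); later iterations use the decoder's own outputs.
    prev_out = np.vstack([np.zeros((1, n_dims)), target_kinematics[:-1]])
    states = np.hstack([emg_features, prev_out])
    dataset_X, dataset_y = [states], [target_kinematics]

    decoder = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    for _ in range(n_iters):
        decoder.fit(np.vstack(dataset_X), np.vstack(dataset_y))

        # Roll the current decoder over the data to collect the states it
        # actually visits (its own previous estimate feeds back as input).
        visited = []
        prev = np.zeros(n_dims)
        for t in range(emg_features.shape[0]):
            s = np.concatenate([emg_features[t], prev])
            visited.append(s)
            prev = decoder.predict(s.reshape(1, -1)).ravel()

        # Aggregate: label the visited states with the intended kinematics.
        dataset_X.append(np.array(visited))
        dataset_y.append(target_kinematics)
    return decoder
```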
  6. This paper presents a framework for shared, human-machine control of a prosthetic arm. The method employs electromyogram and peripheral neural signals to decode motor intent, and incorporates a higher-level goal in the controller to augment human effort. The controller derivation employs Markov Decision Processes. The system is trained using a gradient-ascent approach in which the policy is parameterized using a Kalman filter and the goal is incorporated by adapting the Kalman filter output online. The paper presents an experimental performance analysis of the shared controller when the goal information is imperfect. These results, obtained from an amputee subject and a subject with intact arms, demonstrate that a system controlled by the human user and the machine together exhibits better performance than systems employing machine-only or human-only control.
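A minimal sketch of the shared-control idea, blending a decoded intent command with a goal-directed command. The fixed blending gain and function names are illustrative assumptions; the paper instead learns the combination within an MDP / gradient-ascent framework and adapts the Kalman filter output online:

```python
# Hypothetical sketch of shared human-machine control: a decoded movement
# command (e.g., a Kalman-filter estimate of user intent) is blended with a
# command derived from a higher-level goal.

import numpy as np


def shared_control_step(decoded_velocity, hand_position, goal_position, alpha=0.5):
    """Blend user intent with a goal-directed command.

    decoded_velocity: user's intent decoded from EMG/neural signals, shape (D,)
    hand_position, goal_position: current and target end-point positions, shape (D,)
    alpha: share of control given to the machine (0 = human only, 1 = machine only)
    """
    to_goal = goal_position - hand_position
    norm = np.linalg.norm(to_goal)
    machine_velocity = to_goal / norm if norm > 1e-9 else np.zeros_like(to_goal)
    # Convex combination of the human command and the machine command.
    return (1.0 - alpha) * decoded_velocity + alpha * machine_velocity
```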
  7. This paper presents a framework for modeling neural decoding using electromyogram (EMG) and electrocorticogram (ECoG) signals to interpret human intent and control prosthetic arms. Specifically, the method employs Markov Decision Processes (MDPs) for neural decoding, parameterizing the policy using an artificial neural network. The system is trained using a modification of the dataset aggregation (DAgger) algorithm. The results presented here suggest that this approach performs better than the state of the art.