Significance: The performance of traditional approaches to decoding movement intent from electromyograms (EMGs) and other biological signals commonly degrades over time. Furthermore, conventional algorithms for training neural network-based decoders may not perform well outside the domain of the state transitions observed during training. The work presented in this paper mitigates both of these problems, resulting in an approach that has the potential to substantially improve the quality of life of people with limb loss. Objective: This paper presents and evaluates the performance of four methods for decoding volitional movement intent from intramuscular EMG signals. Methods: The decoders are trained using the dataset aggregation (DAgger) algorithm, in which the training data set is augmented during each training iteration based on the decoded estimates from previous iterations. Four competing decoding methods were developed: polynomial Kalman filters (KFs), multilayer perceptron (MLP) networks, convolutional neural networks (CNNs), and long short-term memory (LSTM) networks. The performance of the four decoding methods was evaluated using EMG data sets recorded from two human volunteers with transradial amputation. Short-term analyses, in which the training and cross-validation data came from the same data set, and long-term analyses, in which training and testing were done on different data sets, were performed. Results: Short-term analyses of…
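As a minimal sketch of the DAgger-style training described above, the loop below retrains a decoder on a data set that is augmented at each iteration with the states visited under the decoder's own estimates, still labeled with the cued movement. The feature construction, feedback scheme, and MLP settings are illustrative assumptions, not the decoders evaluated in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative shapes (not from the paper):
#   emg_features:     (T, n_features) windowed intramuscular EMG features
#   cued_kinematics:  (T, n_dof) intended joint kinematics (the "expert" labels)

def run_decoder(decoder, emg_features, n_dof):
    """Roll the decoder forward, feeding back its own previous estimate."""
    T = emg_features.shape[0]
    decoded = np.zeros((T, n_dof))
    prev = np.zeros(n_dof)
    for t in range(T):
        x = np.concatenate([emg_features[t], prev])    # state = EMG + feedback
        prev = np.atleast_1d(decoder.predict(x[None, :])[0])
        decoded[t] = prev
    return decoded

def dagger_train(emg_features, cued_kinematics, n_iters=5):
    T, n_dof = cued_kinematics.shape
    # Iteration 0: behavior-cloning style, feedback taken from the cue itself.
    feedback = np.vstack([np.zeros((1, n_dof)), cued_kinematics[:-1]])
    X = np.hstack([emg_features, feedback])
    y = cued_kinematics.copy()
    decoder = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    decoder.fit(X, y)

    for _ in range(n_iters):
        # Roll out the current decoder; the visited states now reflect its
        # own estimates, which is exactly what DAgger aggregates.
        decoded = run_decoder(decoder, emg_features, n_dof)
        feedback = np.vstack([np.zeros((1, n_dof)), decoded[:-1]])
        X = np.vstack([X, np.hstack([emg_features, feedback])])
        y = np.vstack([y, cued_kinematics])
        decoder.fit(X, y)
    return decoder
```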
Neural decoding systems using Markov Decision Processes
This paper presents a framework for modeling neural decoding from electromyogram (EMG) and electrocorticogram (ECoG) signals to interpret human intent and control prosthetic arms. Specifically, the method formulates neural decoding as a Markov Decision Process (MDP) and parameterizes the policy with an artificial neural network. The system is trained using a modification of the Dataset Aggregation (DAgger) algorithm. The results presented here suggest that the proposed approach outperforms the state of the art.
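The closed-loop structure that motivates the MDP formulation can be sketched as follows: the decoder is a policy mapping the current state (neural features together with its own previous output) to an action, so its estimates shape the state transitions it sees next. The network below is a hypothetical architecture, not the one used in the paper.

```python
import torch
import torch.nn as nn

class DecodingPolicy(nn.Module):
    """Policy network for MDP-style decoding: maps the state (neural features
    plus the previous decoder output) to an action, e.g. a velocity command
    for the prosthesis. Layer sizes here are illustrative assumptions."""
    def __init__(self, n_features, n_dof, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features + n_dof, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_dof),
        )

    def forward(self, features, prev_action):
        state = torch.cat([features, prev_action], dim=-1)
        return self.net(state)

def rollout(policy, feature_stream, n_dof):
    """Closed-loop rollout: the action chosen at time t becomes part of the
    state at time t + 1, which is what makes decoding an MDP."""
    actions, prev = [], torch.zeros(n_dof)
    for f in feature_stream:                 # f: tensor of shape (n_features,)
        a = policy(f, prev)
        actions.append(a)
        prev = a.detach()
    return torch.stack(actions)
```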
- Award ID(s): 1533649
- Publication Date:
- NSF-PAR ID: 10121191
- Journal Name: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- Page Range or eLocation-ID: 974 to 978
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract. Objective. Neural decoding is an important tool in neural engineering and neural data analysis. Of the various machine learning algorithms adopted for neural decoding, the recently introduced deep learning methods are promising to excel. Therefore, we sought to apply deep learning to decode movement trajectories from the activity of motor cortical neurons. Approach. In this paper, we assessed the performance of deep learning methods in three different decoding schemes: concurrent, time-delay, and spatiotemporal. In the concurrent decoding scheme, where the input to the network is the neural activity coincidental to the movement, deep learning networks including the artificial neural network (ANN) and long short-term memory (LSTM) network were applied to decode movement and compared with traditional machine learning algorithms. Both the ANN and LSTM were further evaluated in the time-delay decoding scheme, in which temporal delays are allowed between neural signals and movements. Lastly, in the spatiotemporal decoding scheme, we trained a convolutional neural network (CNN) to extract movement information from images representing the spatial arrangement of neurons, their activity, and connectomes (i.e. the relative strengths of connectivity between neurons), and combined the CNN and ANN to develop a hybrid spatiotemporal network. To reveal the input features of the CNN in the hybrid network…
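As a rough sketch of the time-delay decoding scheme with an LSTM, the model below reads a short history of binned neural activity and predicts the movement at the final bin; the concurrent scheme corresponds to a history of length one. The bin width, lag window, layer sizes, and two-dimensional output are assumptions for illustration, not the settings used in the paper.

```python
import torch
import torch.nn as nn

class LSTMDecoder(nn.Module):
    """Time-delay decoding sketch: an LSTM reads a short history of binned
    neural activity and predicts the movement at the final time step."""
    def __init__(self, n_neurons, n_outputs=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_neurons, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_outputs)

    def forward(self, spikes):             # spikes: (batch, n_lags, n_neurons)
        h, _ = self.lstm(spikes)
        return self.readout(h[:, -1])      # movement (e.g. x/y position)

# Concurrent decoding is the special case with a single time bin (n_lags == 1);
# larger n_lags allows delays between neural activity and movement.
decoder = LSTMDecoder(n_neurons=100)
example = torch.randn(8, 10, 100)          # batch of 8, 10 lag bins, 100 neurons
print(decoder(example).shape)              # torch.Size([8, 2])
```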
-
Recent neural network-driven semantic role labeling (SRL) systems have shown impressive improvements in F1 scores. These improvements are due to expressive input representations, which, at least at the surface, are orthogonal to the knowledge-rich constrained decoding mechanisms that helped linear SRL models. Introducing the benefits of structure to inform neural models presents a methodological challenge. In this paper, we present a structured tuning framework to improve models using softened constraints only at training time. Our framework leverages the expressiveness of neural networks and provides supervision with structured loss components. We start with a strong baseline (RoBERTa) to validate the impact of our approach, and show that our framework outperforms the baseline by learning to comply with declarative constraints. Additionally, our experiments with smaller training sizes show that we can achieve consistent improvements under low-resource scenarios.
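One way a declarative constraint can be softened into a training-time loss term is sketched below, using a hypothetical "at most one span per core role" constraint; the actual constraints and relaxations used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def unique_core_role_penalty(role_logits, role_idx):
    """Softened constraint: a predicate should have at most one argument span
    with a given core role (e.g. ARG0). Illustrative relaxation only.
    role_logits: (n_spans, n_roles) scores for each candidate span.
    role_idx:    index of the core role being constrained."""
    probs = role_logits.softmax(dim=-1)[:, role_idx]   # P(span takes the role)
    expected_count = probs.sum()
    # Penalize the expected number of spans with this role exceeding one.
    return F.relu(expected_count - 1.0) ** 2

def structured_loss(role_logits, gold_roles, role_idx, lam=0.1):
    """Standard cross-entropy plus the softened-constraint term, applied only
    at training time; decoding stays unconstrained."""
    ce = F.cross_entropy(role_logits, gold_roles)
    return ce + lam * unique_core_role_penalty(role_logits, role_idx)
```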
-
Recent studies have found that the position of mice or rats can be decoded offline from calcium imaging of brain activity. However, given the complex analysis pipeline, real-time position decoding remains a challenging task, especially considering the strict requirements on hardware usage and energy cost for closed-loop feedback applications. In this paper, we propose two neural network-based methods, and corresponding hardware designs, for real-time position decoding from calcium images. Our methods are based on 1) a convolutional neural network (CNN) and 2) a spiking neural network (SNN) converted from the CNN. We implemented quantized CNN and SNN models on an FPGA. Evaluation results show that the CNN and SNN methods achieve 56.3%/83.1% and 56.0%/82.8% Hit-1/Hit-3 accuracy, respectively, for position decoding across different rats. We also observed an accuracy-latency tradeoff of the SNN method when decoding positions under various time steps. Finally, we present our SNN implementation on the neuromorphic chip Loihi. Index terms: calcium image, decoding, neural network.
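A sketch of a CNN position decoder of the kind described, treating decoding as classification over discretized position bins so that Hit-1/Hit-3 accuracy can be read off the top-k predictions. The frame size, architecture, and bin count are assumptions, and the quantization and SNN-conversion steps needed for FPGA or Loihi deployment are not shown.

```python
import torch
import torch.nn as nn

class CalciumPositionCNN(nn.Module):
    """Sketch of a CNN position decoder: a calcium imaging frame goes in,
    scores over discretized position bins come out."""
    def __init__(self, n_position_bins=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, n_position_bins),  # assumes 64x64 input frames
        )

    def forward(self, frame):                 # frame: (batch, 1, 64, 64)
        return self.classifier(self.features(frame))

model = CalciumPositionCNN()
frame = torch.randn(4, 1, 64, 64)
top3 = model(frame).topk(3, dim=-1).indices   # candidate bins for Hit-3 accuracy
```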
-
Abstract. Objective. Advanced robotic lower limb prostheses are mainly controlled autonomously. Although the existing control can assist cyclic movements during locomotion of amputee users, the function of these modern devices is still limited due to the lack of neuromuscular control (i.e. control based on human efferent neural signals from the central nervous system to peripheral muscles for movement production). Neuromuscular control signals can be recorded from muscles as electromyographic (EMG) or myoelectric signals. In fact, using EMG signals to control robotic lower limb prostheses has been an emerging research topic in the field for the past decade, addressing novel prosthesis functionality and adaptability to different environments and task contexts. The objective of this paper is to review robotic lower limb prosthesis control via EMG signals recorded from residual muscles in individuals with lower limb amputations. Approach. We performed a literature review on surgical techniques for enhanced EMG interfaces, EMG sensors, decoding algorithms, and control paradigms for robotic lower limb prostheses. Main results. This review highlights the promise of EMG control for enabling new functionalities in robotic lower limb prostheses, as well as the existing challenges, knowledge gaps, and opportunities on this research topic from human motor control and clinical…