

This content will become publicly available on April 1, 2026

Title: Multi-gesture drag-and-drop decoding in a 2D iBCI control task
Abstract
Objective. Intracortical brain–computer interfaces (iBCIs) have demonstrated the ability to enable point-and-click as well as reach-and-grasp control for people with tetraplegia. However, few studies have investigated iBCIs during long-duration discrete movements that would enable common computer interactions such as ‘click-and-hold’ or ‘drag-and-drop’.
Approach. Here, we examined the performance of multi-class and binary (attempt/no-attempt) classification of neural activity in the left precentral gyrus of two BrainGate2 clinical trial participants performing hand gestures sustained for 1, 2, and 4 s. We then designed a novel ‘latch decoder’ that utilizes parallel multi-class and binary decoding processes, and evaluated its performance on data from isolated sustained gesture attempts and a multi-gesture drag-and-drop task.
Main results. Neural activity during sustained gestures revealed a marked decrease in the discriminability of hand gestures sustained beyond 1 s. Compared to standard direct decoding methods, the latch decoder demonstrated substantial improvement in decoding accuracy for gestures performed independently or in conjunction with simultaneous 2D cursor control.
Significance. This work highlights the unique neurophysiologic response patterns of sustained gesture attempts in human motor cortex and demonstrates a promising decoding approach that could enable individuals with tetraplegia to intuitively control a wider range of consumer electronics using an iBCI.
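The ‘latch decoder’ described in the abstract pairs a binary attempt/no-attempt detector with a multi-class gesture classifier running in parallel. A minimal sketch of the latching logic is below; the threshold, the argmax-at-onset rule, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class LatchDecoder:
    """Illustrative latch decoder: a binary attempt/no-attempt detector
    runs in parallel with a multi-class gesture classifier. When an
    attempt is first detected, the multi-class label is 'latched' and
    held until the attempt detector releases, so the output does not
    flicker as gesture discriminability fades during a sustained hold."""

    def __init__(self, attempt_threshold=0.5):
        self.attempt_threshold = attempt_threshold
        self.latched_label = None  # None means 'no gesture'

    def step(self, p_attempt, class_probs):
        """p_attempt: scalar probability that a gesture is being attempted.
        class_probs: per-gesture probabilities from the multi-class decoder.
        Returns the latched gesture index, or None when no attempt is held."""
        if p_attempt >= self.attempt_threshold:
            if self.latched_label is None:
                # Onset: commit to the currently most likely gesture.
                self.latched_label = int(np.argmax(class_probs))
            # While latched, ignore the drifting multi-class output.
        else:
            # Attempt ended: release the latch.
            self.latched_label = None
        return self.latched_label

# Toy run: gesture 2 is clear at onset, then becomes ambiguous,
# but the latch holds the label until the attempt signal drops.
dec = LatchDecoder()
probs_onset = np.array([0.1, 0.1, 0.8])
probs_drift = np.array([0.4, 0.35, 0.25])  # discriminability degraded
out = [dec.step(0.9, probs_onset),  # latch onto gesture 2
       dec.step(0.9, probs_drift),  # still gesture 2 (held)
       dec.step(0.2, probs_drift)]  # released -> None
print(out)  # [2, 2, None]
```

This captures why latching helps for drag-and-drop: the gesture only needs to be discriminable at onset, while the easier binary attempt signal sustains the hold.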
Award ID(s):
2152260
PAR ID:
10609818
Author(s) / Creator(s):
Publisher / Repository:
PubMed Central
Date Published:
Journal Name:
Journal of Neural Engineering
Volume:
22
Issue:
2
ISSN:
1741-2560
Page Range / eLocation ID:
026054
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Objective. Decoding neural activity from ventral (speech) motor cortex is known to enable high-performance speech brain-computer interface (BCI) control. It was previously unknown whether this brain area could also enable computer control via neural cursor and click, as is typically associated with dorsal (arm and hand) motor cortex. Approach. We recruited a clinical trial participant with ALS and implanted intracortical microelectrode arrays in ventral precentral gyrus (vPCG), which the participant used to operate a speech BCI in a prior study. We developed a cursor BCI driven by the participant’s vPCG neural activity, and evaluated performance on a series of target selection tasks. Main results. The reported vPCG cursor BCI enabled rapidly calibrating (40 seconds), accurate (2.90 bits per second) cursor control and click. The participant also used the BCI to control his own personal computer independently. Significance. These results suggest that placing electrodes in vPCG to optimize for speech decoding may also be a viable strategy for building a multi-modal BCI that enables both speech-based communication and computer control via cursor and click. (BrainGate2 ClinicalTrials.gov ID NCT00912041)
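The 2.90 bits-per-second figure is a throughput measure for target-selection tasks. The sketch below uses one commonly reported achieved-bitrate formula (net correct selections times log2(N−1), divided by time); this metric choice is an assumption for illustration, not necessarily the exact computation used in the study.

```python
import math

def achieved_bitrate(n_targets, n_correct, n_incorrect, seconds):
    """Illustrative achieved-bitrate calculation for a target-selection
    task: each selection among n_targets conveys log2(n_targets - 1)
    bits, and incorrect selections are penalized by subtracting them
    from correct ones (floored at zero)."""
    bits_per_selection = math.log2(n_targets - 1)
    net_selections = max(n_correct - n_incorrect, 0)
    return bits_per_selection * net_selections / seconds

# Toy example (made-up numbers): 60 correct and 2 incorrect selections
# on an 8-target layout over 60 seconds.
rate = achieved_bitrate(n_targets=8, n_correct=60, n_incorrect=2, seconds=60.0)
print(round(rate, 2))  # 2.71
```

The penalty for incorrect selections makes this a conservative measure: a decoder cannot inflate throughput by guessing quickly.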
  2. Brain-machine interfaces (BMIs) have become increasingly popular for restoring lost motor function in individuals with disabilities. Several studies suggest that the central nervous system (CNS) may employ synergies, or movement primitives, to reduce the complexity of control rather than controlling each degree of freedom (DoF) independently, using synergies as a control mechanism for simplifying and achieving complex movements. Our group has previously demonstrated neural decoding of synergy-based hand movements and used synergies effectively to drive hand exoskeletons. In this study, ten healthy right-handed participants performed six types of hand grasps representative of activities of daily living while their neural activity was recorded using electroencephalography (EEG). From half of the participants, hand kinematic synergies were derived, and a neural decoder was developed based on the correlation between hand synergies and corresponding cortical activity, using multivariate linear regression. Using the synergies and the neural decoder derived from the first half of the participants, and only the cortical activity of the remaining half, their hand kinematics were reconstructed with an average accuracy above 70%. Potential applications of synergy-based BMIs for controlling assistive devices in individuals with upper-limb motor deficits, implications of the results for individuals with stroke, and the limitations of the study are discussed.
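The synergy-based pipeline summarized above (kinematic synergies extracted from hand movements, then a multivariate linear map from cortical features to synergy activations) can be sketched on synthetic data as follows; the use of PCA via SVD, the dimensions, and the noise model are illustrative assumptions, not the study's exact methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: T samples of 10-DoF hand kinematics and
# 32-channel EEG-like features (real data would come from recordings).
T, n_dof, n_chan, n_syn = 500, 10, 32, 3
latent = rng.standard_normal((T, n_syn))      # underlying synergy activations
mixing = rng.standard_normal((n_syn, n_dof))  # synergies -> joint kinematics
kinematics = latent @ mixing
eeg = latent @ rng.standard_normal((n_syn, n_chan)) \
      + 0.1 * rng.standard_normal((T, n_chan))

# 1) Derive kinematic synergies via PCA (SVD of centered kinematics).
k_centered = kinematics - kinematics.mean(axis=0)
_, _, vt = np.linalg.svd(k_centered, full_matrices=False)
synergies = vt[:n_syn]                  # (n_syn, n_dof) synergy basis
activations = k_centered @ synergies.T  # per-sample synergy weights

# 2) Multivariate linear regression: EEG features -> synergy activations.
W, *_ = np.linalg.lstsq(eeg, activations, rcond=None)

# 3) Reconstruct kinematics from EEG alone through the synergy basis.
reconstructed = (eeg @ W) @ synergies + kinematics.mean(axis=0)
corr = np.corrcoef(reconstructed.ravel(), kinematics.ravel())[0, 1]
print(corr > 0.7)  # reconstruction tracks the true kinematics
```

The key design point mirrors the abstract: only a few synergy activations are regressed from neural data, and the full high-DoF kinematics are recovered by projecting back through the fixed synergy basis.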
  3. Wearable internet of things (IoT) devices can enable a variety of biomedical applications, such as gesture recognition, health monitoring, and human activity tracking. Size and weight constraints limit the battery capacity, which leads to frequent charging requirements and user dissatisfaction. Minimizing the energy consumption not only alleviates this problem, but also paves the way for self-powered devices that operate on harvested energy. This paper considers an energy-optimal gesture recognition application that runs on energy-harvesting devices. We first formulate an optimization problem for maximizing the number of recognized gestures when energy budget and accuracy constraints are given. Next, we derive an analytical energy model from the power consumption measurements using a wearable IoT device prototype. Then, we prove that maximizing the number of recognized gestures is equivalent to minimizing the duration of gesture recognition. Finally, we utilize this result to construct an optimization technique that maximizes the number of gestures recognized under the energy budget constraints while satisfying the recognition accuracy requirements. Our extensive evaluations demonstrate that the proposed analytical model is valid for wearable IoT applications, and the optimization approach increases the number of recognized gestures by up to 2.4× compared to a manual optimization. 
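The core argument above (under an energy budget, maximizing the number of recognized gestures reduces to minimizing per-gesture recognition duration subject to an accuracy floor) can be illustrated with an assumed affine energy model and an assumed saturating accuracy-versus-duration curve; both model forms and all constants are hypothetical.

```python
# Illustrative sketch: with an (assumed) affine per-gesture energy model
# E(t) = e_static + p_active * t, the number of gestures a budget supports
# is budget / E(t), which is maximized by the smallest duration t that
# still meets the required recognition accuracy.

def energy_per_gesture(t, e_static=0.5, p_active=2.0):
    # e_static: fixed overhead per gesture; p_active: draw while sensing.
    return e_static + p_active * t

def accuracy(t):
    # Assumed monotone, saturating accuracy-vs-duration curve.
    return 1.0 - 0.5 * (0.5 ** (t / 0.25))

def max_gestures(budget, min_accuracy, durations):
    # Because E(t) is increasing in t, the shortest duration meeting the
    # accuracy constraint maximizes budget // E(t).
    feasible = [t for t in durations if accuracy(t) >= min_accuracy]
    t_opt = min(feasible)
    return t_opt, int(budget // energy_per_gesture(t_opt))

durations = [0.25, 0.5, 1.0, 2.0]  # candidate recognition windows (s)
t_opt, n = max_gestures(budget=100.0, min_accuracy=0.9, durations=durations)
print(t_opt, n)  # 1.0 40
```

With these toy constants, 0.25 s and 0.5 s windows miss the 90% accuracy floor, so the optimizer picks 1.0 s and the budget supports 40 gestures.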
  4. Tigrini, Andrea (Ed.)
    Hand gesture classification is crucial for the control of many modern technologies, ranging from virtual and augmented reality systems to assistive mechatronic devices. A prominent control technique employs surface electromyography (EMG) and pattern recognition algorithms to identify specific patterns in muscle electrical activity and translate these into device commands. While well established in consumer, clinical, and research applications, this technique suffers from misclassification errors caused by limb movements and the weight of manipulated objects, both vital aspects of how we use our hands in daily life. An emerging alternative is force myography (FMG), which uses pattern recognition algorithms to predict hand gestures from the axial forces present at the skin’s surface created by contractions of the underlying muscles. As EMG and FMG capture different physiological signals associated with muscle contraction, we hypothesized that each may offer unique additional information for gesture classification, potentially improving classification accuracy in the presence of limb-position and object-loading effects. We therefore tested the effect of limb position and grasped load on three sensing modalities: EMG, FMG, and the fused combination of the two. Twenty-seven able-bodied participants performed a grasp-and-release task with 4 hand gestures at 8 positions and under 5 object-weight conditions. We then examined the effects of limb position and grasped load on gesture classification accuracy across each sensing modality. Position and grasped load had statistically significant effects on the classification performance of all three sensing modalities, and the combination of EMG and FMG provided the highest classification accuracy for hand gesture, limb position, and grasped-load combinations (97.34%), followed by FMG (92.27%) and then EMG (82.84%). This indicates that adding FMG to traditional EMG control systems offers unique additional data for more effective device control and can help accommodate different limb positions and grasped object loads.
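The benefit of fusing EMG and FMG can be illustrated with a toy feature-level fusion on synthetic data, where concatenating the two modalities' feature vectors increases class separation; the feature geometry, noise model, and nearest-centroid classifier here are all assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in features: 4 gestures, 40 trials each per set, with
# 8 EMG and 8 FMG features per trial (real features would be e.g.
# per-sensor RMS amplitudes). Class c is centered on feature c in each
# modality; this toy geometry is an assumption for illustration.
n_classes, n_trials, n_feat = 4, 40, 8
emg_means = 3.0 * np.eye(n_classes, n_feat)
fmg_means = 3.0 * np.eye(n_classes, n_feat)

def make_set():
    X_emg, X_fmg, y = [], [], []
    for c in range(n_classes):
        X_emg.append(emg_means[c] + rng.standard_normal((n_trials, n_feat)))
        X_fmg.append(fmg_means[c] + rng.standard_normal((n_trials, n_feat)))
        y += [c] * n_trials
    return np.vstack(X_emg), np.vstack(X_fmg), np.array(y)

def nearest_centroid_acc(X_tr, y_tr, X_te, y_te):
    # Train: per-class centroids. Test: assign to nearest centroid.
    cents = np.stack([X_tr[y_tr == c].mean(axis=0) for c in range(n_classes)])
    d = ((X_te[:, None, :] - cents[None]) ** 2).sum(axis=2)
    return float((d.argmin(axis=1) == y_te).mean())

tr_emg, tr_fmg, y_tr = make_set()
te_emg, te_fmg, y_te = make_set()

acc_emg = nearest_centroid_acc(tr_emg, y_tr, te_emg, y_te)
acc_fmg = nearest_centroid_acc(tr_fmg, y_tr, te_fmg, y_te)
# Feature-level fusion: simply concatenate the two modalities.
acc_fused = nearest_centroid_acc(np.hstack([tr_emg, tr_fmg]), y_tr,
                                 np.hstack([te_emg, te_fmg]), y_te)
print(acc_emg, acc_fmg, acc_fused)
```

Because the two modalities contribute independent informative dimensions, the fused feature space separates the classes at a larger distance relative to the noise, mirroring the ordering reported in the abstract.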
  5.
    A reliable neural-machine interface is essential for humans to intuitively interact with advanced robotic hands in an unconstrained environment. Existing neural decoding approaches utilize either discrete hand-gesture-based pattern recognition or continuous force decoding of one finger at a time. We developed a neural decoding technique that allowed continuous and concurrent prediction of the forces of different fingers based on spinal motoneuron firing information. High-density skin-surface electromyogram (HD-EMG) signals of the finger extensor muscles were recorded while human participants produced isometric flexion forces in a dexterous manner (i.e. produced varying forces using either a single finger or multiple fingers concurrently). Motoneuron firing information was extracted from the EMG signals using a blind source separation technique, and each identified neuron was further classified as being associated with a given finger. The forces of individual fingers were then predicted concurrently by utilizing the firing frequency of each finger’s motoneuron pool. Compared with conventional approaches, our technique led to better prediction performance, i.e. a higher correlation ([Formula: see text] versus [Formula: see text]), a lower prediction error ([Formula: see text]% MVC versus [Formula: see text]% MVC), and a higher accuracy in finger state (rest/active) prediction ([Formula: see text]% versus [Formula: see text]%). Our decoding method demonstrated the possibility of classifying motoneurons by finger, which significantly alleviated the cross-talk issue of EMG recordings from neighboring hand muscles and allowed finger forces to be decoded individually and concurrently. The outcomes offer a robust neural-machine interface that could allow users to intuitively control robotic hands in a dexterous manner.
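The per-finger decoding stage described above (motoneurons assigned to fingers, each finger's force predicted from its pool's firing rate) can be sketched on simulated spike trains; the Poisson spiking model, the 250 ms smoothing window, and the linear rate-to-force fit are illustrative assumptions standing in for the study's decomposition and decoding details.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sketch: motoneurons (e.g. identified by EMG decomposition)
# are pre-assigned to fingers; each finger's force is predicted from the
# pooled firing rate of its motoneuron pool via a per-finger linear fit.
fs = 100                      # samples per second
t = np.arange(0, 10, 1 / fs)  # one 10 s trial
n_fingers, units_per_finger = 2, 20

# Assumed target force profiles (fraction of MVC) for two fingers.
forces = np.stack([0.5 + 0.4 * np.sin(2 * np.pi * 0.2 * t),
                   0.5 + 0.4 * np.cos(2 * np.pi * 0.2 * t)])

def pooled_rate(force):
    # Each unit spikes as a Bernoulli/Poisson process whose rate scales
    # with its finger's force; pool counts are smoothed with a 250 ms
    # boxcar and converted to spikes/s.
    spikes = rng.random((units_per_finger, t.size)) < (force * 30 / fs)
    counts = spikes.sum(axis=0).astype(float)
    kernel = np.ones(int(0.25 * fs)) / (0.25 * fs)
    return np.convolve(counts, kernel, mode="same") * fs

rates = np.stack([pooled_rate(f) for f in forces])

# Per-finger linear rate -> force map (least squares with intercept).
preds = []
for r, f in zip(rates, forces):
    A = np.stack([r, np.ones_like(r)], axis=1)
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    preds.append(A @ coef)

corrs = [np.corrcoef(p, f)[0, 1] for p, f in zip(preds, forces)]
print(all(c > 0.8 for c in corrs))
```

Because each finger's prediction draws only on its own motoneuron pool, the two force traces are decoded concurrently without the cross-talk that raw neighboring-muscle EMG amplitudes would introduce.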