Wearable technologies for hand gesture classification are becoming increasingly prominent due to the growing need for more natural, human-centered control of complex devices. This need is particularly evident in emerging fields such as virtual reality and bionic prostheses, which require precise control with minimal delay. One method used for hand gesture recognition is force myography (FMG), which utilizes non-invasive pressure sensors to measure radial muscle forces at the skin’s surface of the forearm during hand movements. These sensors, typically force-sensitive resistors (FSRs), require additional circuitry to generate analog output signals, which are then classified using machine learning to derive corresponding control signals for the device. The performance of hand gesture classification can be influenced by the characteristics of this output signal, which may vary depending on the circuitry used. Our study examined three circuits commonly used in FMG systems: the voltage divider (VD), unity gain amplifier (UGA), and transimpedance amplifier (TIA). We first conducted benchtop testing of FSRs to characterize the impact of this circuitry on linearity, deadband, hysteresis, and drift, all metrics with the potential to influence an FMG system’s performance. To evaluate each circuit’s performance in hand gesture classification, we constructed an FMG band with 8 FSRs, using an adjustable Velcro strap and interchangeable circuitry. Wearing the FMG band, participants (N = 15) were instructed to perform 10 hand gestures commonly used in daily living. Our findings indicated that the UGA circuit outperformed the others in minimizing hysteresis, drift, and deadband, with results comparable to the VD, while the TIA circuit excelled in ensuring linearity. Further, contemporary machine learning algorithms used to detect hand gestures were unaffected by the circuitry employed. These results suggest that applications of FMG requiring precise sensing of force values would likely benefit from using the UGA. Alternatively, if hand gesture classification is the only use case, developers can take advantage of less complex circuitry such as the VD.
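As a concrete illustration of why the interface circuit changes an FSR’s force-to-voltage curve, below is a minimal Python sketch assuming a first-order FSR model in which conductance grows roughly linearly with applied force. All component values and model coefficients are illustrative placeholders, not measurements from the study.

```python
import numpy as np

VCC = 3.3           # supply voltage (V); assumed
R_FIXED = 10e3      # divider's fixed resistor (ohms); assumed
R_FEEDBACK = 2e3    # TIA feedback resistor (ohms); assumed
G0, K = 1e-6, 2e-5  # illustrative FSR model: conductance G = G0 + K * force

def fsr_conductance(force_n):
    """FSR conductance (S) from a linear conductance-vs-force model."""
    return G0 + K * force_n

def v_divider(force_n):
    """VD output: nonlinear in force because R_fsr appears in the ratio."""
    r_fsr = 1.0 / fsr_conductance(force_n)
    return VCC * R_FIXED / (r_fsr + R_FIXED)

def v_tia(force_n):
    """TIA output magnitude: proportional to conductance, hence roughly
    linear in force under this model."""
    return VCC * R_FEEDBACK * fsr_conductance(force_n)

force = np.linspace(0.0, 20.0, 50)                 # 0-20 N sweep
print(np.corrcoef(force, v_divider(force))[0, 1])  # < 1: curved response
print(np.corrcoef(force, v_tia(force))[0, 1])      # ~= 1: linear response
```

Under this model the TIA output is directly proportional to conductance, which explains its straighter force response, while the divider output is a nonlinear ratio of the same quantities.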
This content will become publicly available on April 10, 2026

The effects of limb position and grasped load on hand gesture classification using electromyography, force myography, and their combination
Hand gesture classification is crucial for the control of many modern technologies, ranging from virtual and augmented reality systems to assistive mechatronic devices. A prominent control technique employs surface electromyography (EMG) and pattern recognition algorithms to identify specific patterns in muscle electrical activity and translate these to device commands. While well established in consumer, clinical, and research applications, this technique suffers from misclassification errors caused by limb movements and the weight of manipulated objects, both vital aspects of how we use our hands in daily life. An emerging alternative control technique is force myography (FMG), which uses pattern recognition algorithms to predict hand gestures from the axial forces present at the skin’s surface created by contractions of the underlying muscles. As EMG and FMG capture different physiological signals associated with muscle contraction, we hypothesized that each may offer unique additional information for gesture classification, potentially improving classification accuracy in the presence of limb position and object loading effects. Thus, we tested the effect of limb position and grasped load on 3 different sensing modalities: EMG, FMG, and the fused combination of the two. Twenty-seven able-bodied participants performed a grasp-and-release task with 4 hand gestures at 8 positions and under 5 object weight conditions. We then examined the effects of limb position and grasped load on gesture classification accuracy across each sensing modality. Position and grasped load had statistically significant effects on the classification performance of the 3 sensing modalities, and the combination of EMG and FMG provided the highest classification accuracy of hand gesture, limb position, and grasped load combinations (97.34%), followed by FMG (92.27%) and then EMG (82.84%). This indicates that the addition of FMG to traditional EMG control systems offers unique additional data for more effective device control and can help accommodate different limb positions and grasped object loads.
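For readers unfamiliar with feature-level fusion, the following is a minimal, hedged sketch of the comparison above: per-window EMG and FMG feature vectors are concatenated, and each modality is scored with a linear discriminant classifier. The synthetic features, dimensions, and classifier choice are illustrative assumptions, not the study’s actual pipeline or data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows, n_emg_ch, n_fmg_ch = 400, 8, 8
labels = rng.integers(0, 4, n_windows)  # 4 hand gestures (assumed encoding)

# Placeholder features: in practice, time-domain EMG features (e.g., mean
# absolute value per channel) and mean FMG pressure per channel per window.
emg_features = rng.normal(size=(n_windows, n_emg_ch)) + labels[:, None]
fmg_features = rng.normal(size=(n_windows, n_fmg_ch)) + labels[:, None]
fused = np.hstack([emg_features, fmg_features])  # simple feature-level fusion

for name, X in [("EMG", emg_features), ("FMG", fmg_features), ("EMG+FMG", fused)]:
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
    print(f"{name}: {acc:.2%}")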
- Award ID(s): 2152260
- PAR ID: 10609833
- Editor(s): Tigrini, Andrea
- Publisher / Repository: PLOS ONE
- Date Published:
- Journal Name: PLOS ONE
- Volume: 20
- Issue: 4
- ISSN: 1932-6203
- Page Range / eLocation ID: e0321319
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Clinical translation of “intelligent” lower-limb assistive technologies relies on robust control interfaces capable of accurately detecting user intent. To date, mechanical sensors and surface electromyography (EMG) have been the primary sensing modalities used to classify ambulation. Ultrasound (US) imaging can be used to detect user intent by characterizing structural changes of muscle. Our study evaluates wearable US imaging as a new sensing modality for continuous classification of five discrete ambulation modes: level, incline, decline, stair ascent, and stair descent, and benchmarks performance relative to EMG sensing. Ten able-bodied subjects were equipped with a wearable US scanner and eight unilateral EMG sensors. Time-intensity features were recorded from US images of three thigh muscles. Features from sliding windows of EMG signals were analyzed in two configurations: one including 5 EMG sensors on muscles around the thigh, and another with 3 additional sensors placed on the shank. Linear discriminant analysis was implemented to continuously classify these phase-dependent features of each sensing modality as one of five ambulation modes. US-based sensing statistically improved mean classification accuracy to 99.8% (99.5-100% CI) compared to 8-EMG sensors (85.8%; 84.0-87.6% CI) and 5-EMG sensors (75.3%; 74.5-76.1% CI). Further, separability analyses show the importance of superficial and deep US information for stair classification relative to other modes. These results are the first to demonstrate the ability of US-based sensing to classify discrete ambulation modes, highlighting the potential for improved assistive device control using less widespread, less superficial, and higher-resolution sensing of skeletal muscle.
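A minimal sketch of the kind of sliding-window “time-intensity” feature extraction this abstract describes is shown below: per-frame mean intensity over a muscle region of interest, stacked into overlapping windows for a downstream classifier. The region, window length, and random placeholder frames are assumptions, not the study’s settings.

```python
import numpy as np

def time_intensity(frames: np.ndarray, roi: tuple) -> np.ndarray:
    """Per-frame mean pixel intensity inside a muscle region of interest."""
    return frames[:, roi[0], roi[1]].mean(axis=(1, 2))

def sliding_windows(signal: np.ndarray, width: int, step: int) -> np.ndarray:
    """Stack overlapping windows for phase-dependent classification."""
    starts = range(0, len(signal) - width + 1, step)
    return np.stack([signal[s:s + width] for s in starts])

frames = np.random.rand(300, 128, 128)            # placeholder US sequence
roi = (slice(20, 60), slice(30, 90))              # assumed muscle region
ti = time_intensity(frames, roi)                  # one feature channel
windows = sliding_windows(ti, width=30, step=10)  # inputs to the classifier
print(windows.shape)                              # (28, 30)
```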
- Research on robotic lower-limb assistive devices over the past decade has generated autonomous, multiple degree-of-freedom devices to augment human performance during a variety of scenarios. However, the increase in capabilities of these devices is met with an increase in the complexity of the overall control problem and the requirement for an accurate and robust sensing modality for intent recognition. Due to its ability to precede changes in motion, surface electromyography (EMG) is widely studied as a peripheral sensing modality for capturing features of muscle activity as an input for control of powered assistive devices. To capture features that contribute to muscle contraction and joint motion beyond the activity of superficial muscles, researchers have introduced sonomyography, or real-time dynamic ultrasound imaging of skeletal muscle. However, the ability of sonomyography features to continuously predict multiple lower-limb joint kinematics during widely varying ambulation tasks, and their potential as an input for powered multiple degree-of-freedom lower-limb assistive devices, is unknown. The objective of this research is to evaluate surface EMG and sonomyography, as well as the fusion of features from both sensing modalities, as inputs to Gaussian process regression models for the continuous estimation of hip, knee, and ankle angle and velocity during level walking, stair ascent/descent, and ramp ascent/descent. Gaussian process regression is a Bayesian nonlinear regression model that has been introduced as an alternative to musculoskeletal model-based techniques. In this study, time-intensity features of sonomyography on both the anterior and posterior thigh, along with time-domain features of surface EMG from eight muscles on the lower limb, were used to train and test subject-dependent and task-invariant Gaussian process regression models for the continuous estimation of hip, knee, and ankle motion. Overall, anterior sonomyography sensor fusion with surface EMG significantly improved estimation of hip, knee, and ankle motion for all ambulation tasks (level-ground, stair, and ramp ambulation) in comparison to surface EMG alone. Additionally, anterior sonomyography alone significantly improved errors at the hip and knee for most tasks compared to surface EMG. These findings help inform the implementation and integration of volitional control strategies for robotic assistive technologies.
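As a rough sketch of the regression stage described above, the following fits a Gaussian process regressor mapping fused sonomyography and EMG features to a joint angle. The RBF-plus-noise kernel, feature dimensions, and synthetic data are illustrative assumptions; the study’s kernel choice and features are not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))  # fused SMG + EMG feature vectors (assumed dim.)
knee_angle = X[:, 0] * 20 + 60 + rng.normal(scale=2, size=200)  # synthetic target

# Bayesian nonlinear regression: RBF kernel plus a noise term.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:150], knee_angle[:150])                 # subject-dependent training
mean, std = gpr.predict(X[150:], return_std=True)  # estimate + uncertainty
print(f"RMSE: {np.sqrt(np.mean((mean - knee_angle[150:]) ** 2)):.2f} deg")
```

A practical appeal of Gaussian process regression here is that it returns a predictive standard deviation alongside each estimate, which a controller can use to gate low-confidence predictions.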
- A reliable neural-machine interface is essential for humans to intuitively interact with advanced robotic hands in an unconstrained environment. Existing neural decoding approaches utilize either discrete hand gesture-based pattern recognition or continuous force decoding with one finger at a time. We developed a neural decoding technique that allowed continuous and concurrent prediction of forces of different fingers based on spinal motoneuron firing information. High-density skin-surface electromyogram (HD-EMG) signals of the finger extensor muscle were recorded while human participants produced isometric flexion forces in a dexterous manner (i.e., produced varying forces using either a single finger or multiple fingers concurrently). Motoneuron firing information was extracted from the EMG signals using a blind source separation technique, and each identified neuron was further classified as being associated with a given finger. The forces of individual fingers were then predicted concurrently by utilizing the corresponding motoneuron pool firing frequency of individual fingers. Compared with conventional approaches, our technique led to better prediction performance, i.e., a higher correlation ([Formula: see text] versus [Formula: see text]), a lower prediction error ([Formula: see text]% MVC versus [Formula: see text]% MVC), and a higher accuracy in finger state (rest/active) prediction ([Formula: see text]% versus [Formula: see text]%). Our decoding method demonstrated the possibility of classifying motoneurons for different fingers, which significantly alleviated the cross-talk issue of EMG recordings from neighboring hand muscles and allowed the decoding of finger forces individually and concurrently. The outcomes offer a robust neural-machine interface that could allow users to intuitively control robotic hands in a dexterous manner.
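The final decoding stage described above can be sketched roughly as follows: one finger’s motoneuron-pool spike trains are pooled into a smoothed firing rate, and a simple linear map converts rate to force. The spike trains here are synthetic and the blind source separation step is not reproduced; the linear fit is only a stand-in for the study’s concurrent per-finger prediction.

```python
import numpy as np

rng = np.random.default_rng(2)
FS = 2048  # assumed HD-EMG sample rate (Hz)

def pool_firing_rate(spikes: np.ndarray, win_s: float = 0.2) -> np.ndarray:
    """Smoothed firing rate (spikes/s) of one finger's motoneuron pool."""
    pooled = spikes.sum(axis=0)          # spikes per sample across the pool
    n = int(win_s * FS)                  # moving-average window length
    return np.convolve(pooled, np.ones(n) / n, mode="same") * FS

# Synthetic pool: 10 neurons whose firing probability tracks a force ramp.
force = np.linspace(0.0, 0.3, 4 * FS)    # target force (fraction of MVC)
spikes = rng.random((10, force.size)) < force * 0.05
rate = pool_firing_rate(spikes)

# Linear rate-to-force map via least squares.
a, b = np.polyfit(rate, force, deg=1)
predicted = a * rate + b
print(f"r = {np.corrcoef(predicted, force)[0, 1]:.3f}")
```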
- This paper aims to present a potential cybersecurity risk existing in mixed reality (MR)-based smart manufacturing applications that decipher digital passwords through a single RGB camera capturing the user’s mid-air gestures. We first created a test bed, an MR-based smart factory management system consisting of mid-air gesture-based user interfaces (UIs) on a video see-through MR head-mounted display. To interact with UIs and input information, the user’s hand movements and gestures are tracked by the MR system. We set up the experiment to estimate the password input by users through mid-air hand gestures on a virtual numeric keypad. To achieve this goal, we developed a lightweight machine learning-based hand position tracking and gesture recognition method. This method takes either video streaming or recorded video clips (taken by a single RGB camera in front of the user) as input, where the videos record the users’ hand movements and gestures but not the virtual UIs. With the assumption of a known size, position, and layout of the keypad, the machine learning method estimates the password through hand gesture recognition and finger position detection. The evaluation results indicate the effectiveness of the proposed method, with a high accuracy of 97.03%, 94.06%, and 83.83% for 2-digit, 4-digit, and 6-digit passwords, respectively, using real-time video streaming as input under the known-length condition. Under the unknown-length condition, the proposed method reaches 85.50%, 76.15%, and 77.89% accuracy for 2-digit, 4-digit, and 6-digit passwords, respectively.
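The geometric step this abstract relies on, mapping a detected fingertip coordinate to a key given the keypad’s known size, position, and layout, can be sketched as below. The hand-tracking model that produces the fingertip coordinate is not reproduced; the keypad origin and key dimensions are illustrative values.

```python
# Standard 3x4 numeric keypad layout, assumed known to the attacker.
KEYPAD = [["1", "2", "3"],
          ["4", "5", "6"],
          ["7", "8", "9"],
          ["*", "0", "#"]]

def key_at(x, y, origin, key_w, key_h):
    """Return the key under pixel (x, y), or None if outside the keypad."""
    col = int((x - origin[0]) // key_w)
    row = int((y - origin[1]) // key_h)
    if 0 <= row < len(KEYPAD) and 0 <= col < len(KEYPAD[0]):
        return KEYPAD[row][col]
    return None

# Example: keypad drawn at pixel (100, 50) with 40x40-pixel keys.
print(key_at(185, 95, origin=(100, 50), key_w=40, key_h=40))  # -> "6"
```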