
Title: Load Position and Weight Classification during Carrying Gait Using Wearable Inertial and Electromyographic Sensors
Lifting and carrying heavy objects is a major aspect of physically intensive jobs. Wearable sensors have previously been used to classify different ways of picking up an object, but have seen only limited use for automatic classification of load position and weight while a person is walking and carrying an object. In this proof-of-concept study, we thus used wearable inertial and electromyographic sensors for offline classification of different load positions (frontal vs. unilateral vs. bilateral side loads) and weights during gait. Ten participants performed 19 different carrying trials each while wearing the sensors, and data from these trials were used to train and evaluate classification algorithms based on supervised machine learning. The algorithms differentiated between frontal and other loads (side/none) with an accuracy of 100%, between frontal vs. unilateral side load vs. bilateral side load with an accuracy of 96.1%, and between different load asymmetry levels with accuracies of 75–79%. While the study is limited by a lack of electromyographic sensors on the arms and a limited number of load positions/weights, it shows that wearable sensors can differentiate between different load positions and weights during gait with high accuracy. In the future, such approaches could be used to control assistive devices or for long-term worker monitoring in physically demanding occupations.
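To illustrate the kind of supervised classification described above, the sketch below trains a nearest-centroid classifier to separate the three load positions from per-stride sensor features. The feature names (trunk-lean angle, left/right muscle-activity asymmetry) and all numeric values are illustrative assumptions, not data or methods from the paper.

```python
# Hypothetical sketch: classifying load position (frontal vs. unilateral vs.
# bilateral) from per-stride features with a nearest-centroid classifier.
# Features and values are illustrative, not taken from the study.
import numpy as np

def fit_centroids(X, y):
    """Mean feature vector per class (a simple supervised baseline)."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Synthetic training data: rows are strides, columns are illustrative
# features (trunk-lean angle, left/right EMG RMS asymmetry).
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal([10.0, 0.0], 0.5, (20, 2)),  # frontal: forward lean, symmetric
    rng.normal([2.0, 5.0], 0.5, (20, 2)),   # unilateral: lateral asymmetry
    rng.normal([2.0, 0.0], 0.5, (20, 2)),   # bilateral: symmetric, upright
])
y = np.array(["frontal"] * 20 + ["unilateral"] * 20 + ["bilateral"] * 20)

centroids = fit_centroids(X, y)
print(predict(centroids, np.array([9.5, 0.2])))  # a frontal-like stride
```

In the actual study a more capable supervised learner would be trained on real inertial and electromyographic features, but the fit/predict structure is the same.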
Award ID(s): 1933409
PAR ID: 10207922
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Sensors
Volume: 20
Issue: 17
ISSN: 1424-8220
Page Range / eLocation ID: 4963
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Tigrini, Andrea (Ed.)
    Hand gesture classification is crucial for the control of many modern technologies, ranging from virtual and augmented reality systems to assistive mechatronic devices. A prominent control technique employs surface electromyography (EMG) and pattern recognition algorithms to identify specific patterns in muscle electrical activity and translate these to device commands. While well established in consumer, clinical, and research applications, this technique suffers from misclassification errors caused by limb movements and the weight of manipulated objects, both vital aspects of how we use our hands in daily life. An emerging alternative control technique is force myography (FMG), which uses pattern recognition algorithms to predict hand gestures from the axial forces present at the skin’s surface created by contractions of the underlying muscles. As EMG and FMG capture different physiological signals associated with muscle contraction, we hypothesized that each may offer unique additional information for gesture classification, potentially improving classification accuracy in the presence of limb position and object loading effects. We therefore tested the effect of limb position and grasped load on three different sensing modalities: EMG, FMG, and the fused combination of the two. Twenty-seven able-bodied participants performed a grasp-and-release task with 4 hand gestures at 8 positions and under 5 object weight conditions. We then examined the effects of limb position and grasped load on gesture classification accuracy across each sensing modality. Position and grasped load had statistically significant effects on the classification performance of the three sensing modalities, and the combination of EMG and FMG provided the highest classification accuracy across hand gesture, limb position, and grasped load combinations (97.34%), followed by FMG (92.27%) and then EMG (82.84%). This suggests that adding FMG to traditional EMG control systems provides unique additional data for more effective device control and can help accommodate different limb positions and grasped object loads.
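The fused EMG+FMG modality described above can be sketched as feature-level fusion: features are extracted per window from each modality and concatenated before classification. Channel counts, window length, and the RMS feature choice below are assumptions for illustration, not the study's actual pipeline.

```python
# Illustrative sketch of feature-level sensor fusion: EMG and FMG feature
# vectors for one gesture window are concatenated into a single input for
# the classifier. Channel counts and features are assumptions.
import numpy as np

def window_features(signal):
    """Per-channel RMS over one analysis window (channels x samples)."""
    return np.sqrt(np.mean(signal ** 2, axis=1))

rng = np.random.default_rng(1)
emg = rng.normal(0.0, 1.0, (8, 200))  # e.g. 8 EMG channels, 200 samples
fmg = rng.normal(0.0, 1.0, (8, 200))  # e.g. 8 FMG channels, 200 samples

fused = np.concatenate([window_features(emg), window_features(fmg)])
print(fused.shape)  # one fused feature vector per window
```

Because EMG reflects electrical activity and FMG reflects mechanical surface forces, the concatenated vector carries complementary information, which is consistent with the fused modality scoring highest in the study.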
  2. A bilingual environment is associated with changes in the brain's structure and function. Some suggest that bilingualism also improves higher‐cognitive functions in infants as young as 6‐months, yet whether this effect is associated with changes in the infant brain remains unknown. In the present study, we measured brain activity using functional near‐infrared spectroscopy in monolingual‐ and bilingual‐raised 6‐ and 10‐month‐old infants. Infants completed an orienting attention task, in which a cue was presented prior to an object appearing on the same (Valid) or opposite (Invalid) side of a display. Task performance did not differ between the groups but neural activity did. At 6‐months, both groups showed greater activity for Valid (> Invalid) trials in frontal regions (left hemisphere for bilinguals, right hemisphere for monolinguals). At 10‐months, bilinguals showed greater activity for Invalid (> Valid) trials in bilateral frontal regions, while monolinguals showed greater brain activity for Valid (> Invalid) trials in left frontal regions. Bilinguals’ brain activity trended with their parents’ reporting of dual‐language mixing when speaking to their child. These findings are the first to indicate how early (dual) language experience can alter the cortical organization underlying broader, non‐linguistic cognitive functions during the first year of life. 
  3. The purpose of this study was to use 3D motion capture and stretchable soft robotic sensors (SRS) to collect foot-ankle movement from participants performing walking gait cycles on flat and sloped surfaces. The primary aim was to assess differences between 3D motion capture and a new SRS-based wearable solution. Given the complex nature of using a linear solution to accurately quantify the movement of triaxial joints during a dynamic gait movement, 20 participants performing multiple walking trials were measured. The participant gait data were then upscaled (for the SRS), time-aligned (based on right heel strikes), and smoothed using filtering methods. A multivariate linear model was developed to assess goodness-of-fit based on mean absolute error (MAE; 1.54), root mean square error (RMSE; 1.96), and absolute R2 (R2; 0.854). Two- and three-SRS combinations were evaluated to determine if similar fit scores could be achieved using fewer sensors. Inversion (based on MAE and RMSE) and plantar flexion (based on R2) sensor removal provided the second-best fit scores. Given that the scores indicate a high level of fit, with further development an SRS-based wearable solution has the potential to measure motion during gait-based tasks with the accuracy of a 3D motion capture system.
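The goodness-of-fit metrics reported above (MAE, RMSE, R²) compare a wearable estimate against a motion-capture reference and can be computed directly; the arrays below are made-up illustrative values, not the study's data.

```python
# Sketch of the goodness-of-fit metrics used above, comparing a wearable
# estimate against a motion-capture reference. Values are illustrative.
import numpy as np

def mae(ref, est):
    """Mean absolute error."""
    return np.mean(np.abs(ref - est))

def rmse(ref, est):
    """Root mean square error."""
    return np.sqrt(np.mean((ref - est) ** 2))

def r_squared(ref, est):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((ref - est) ** 2)
    ss_tot = np.sum((ref - ref.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

ref = np.array([0.0, 5.0, 10.0, 15.0, 20.0])  # e.g. ankle angle, degrees
est = np.array([0.5, 4.5, 11.0, 14.0, 21.0])

print(mae(ref, est), rmse(ref, est), r_squared(ref, est))
```

MAE and RMSE are in the same units as the signal (RMSE penalising large errors more), while R² is unitless, which is why the study reports all three.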
  4.
    Although several studies have used wearable sensors to analyze human lifting, this has generally only been done in a limited manner. In this proof-of-concept study, we investigate multiple aspects of offline lift characterization using wearable inertial measurement sensors: detecting the start and end of the lift and classifying the vertical movement of the object, the posture used, the weight of the object, and the asymmetry involved. In addition, the lift duration, horizontal distance from the lifter to the object, the vertical displacement of the object, and the asymmetric angle are computed as lift parameters. Twenty-four healthy participants performed two repetitions of 30 different main lifts each while wearing a commercial inertial measurement system. The data from these trials were used to develop, train, and evaluate the lift characterization algorithms presented. The lift detection algorithm had a start time error of 0.10 s ± 0.21 s and an end time error of 0.36 s ± 0.27 s across all 1489 lift trials with no missed lifts. For posture, asymmetry, vertical movement, and weight, our classifiers achieved accuracies of 96.8%, 98.3%, 97.3%, and 64.2%, respectively, for automatically detected lifts. The vertical height and displacement estimates were, on average, within 25 cm of the reference values. The horizontal distances measured for some lifts were quite different from expected (up to 14.5 cm), but were very consistent. Estimated asymmetry angles were similarly precise. In the future, these proof-of-concept offline algorithms can be expanded and improved to work in real-time. This would enable their use in applications such as real-time health monitoring and feedback for assistive devices.
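The lift start/end detection described above can be sketched, in its simplest form, as thresholding the magnitude of a trunk-sensor signal and taking the first and last crossings. The threshold value and synthetic trace below are assumptions for illustration; the study's actual detector is more sophisticated.

```python
# Hypothetical sketch of offline lift start/end detection: threshold the
# magnitude of a trunk-sensor signal and report the first/last crossing.
# Threshold and signal values are illustrative assumptions.
import numpy as np

def detect_lift(signal, threshold):
    """Return (start_idx, end_idx) of the region where |signal| exceeds
    the threshold, or None if no lift-like activity is found."""
    active = np.flatnonzero(np.abs(signal) > threshold)
    if active.size == 0:
        return None
    return int(active[0]), int(active[-1])

# Synthetic trunk angular-velocity trace: quiet, lift, quiet.
sig = np.concatenate([np.zeros(50), np.full(30, 1.5), np.zeros(40)])
print(detect_lift(sig, threshold=0.5))
```

Dividing the detected sample indices by the sampling rate yields the start/end times whose errors are reported above.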
    We explore the effect of auxiliary labels in improving the classification accuracy of wearable sensor-based human activity recognition (HAR) systems, which are primarily trained with the supervision of activity labels (e.g. running, walking, jumping). Supplemental meta-data are often available during the data collection process, such as the body positions of the wearable sensors, subjects' demographic information (e.g. gender, age), and the type of wearable used (e.g. smartphone, smartwatch). This information, while not directly related to the activity classification task, can nonetheless provide auxiliary supervision and has the potential to significantly improve HAR accuracy by providing extra guidance on how to handle the sample heterogeneity introduced by changes in domain (i.e., positions, persons, or sensors), especially in the presence of limited activity labels. However, integrating such meta-data in the classification pipeline is non-trivial: (i) the complex interaction between the activity and domain label spaces is hard to capture with a simple multi-task and/or adversarial learning setup, and (ii) meta-data and activity labels might not be simultaneously available for all collected samples. To address these issues, we propose a novel framework, Conditional Domain Embeddings (CoDEm). From the available unlabeled raw samples and their domain meta-data, we first learn a set of domain embeddings using a contrastive learning methodology to handle inter-domain variability and inter-domain similarity. To classify the activities, CoDEm then learns the label embeddings in a contrastive fashion, conditioned on the domain embeddings with a novel attention mechanism, forcing the model to learn the complex domain-activity relationships. We extensively evaluate CoDEm on three benchmark datasets against a number of multi-task and adversarial learning baselines and achieve state-of-the-art performance in each setting.
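The contrastive idea behind learning domain embeddings can be sketched with a generic InfoNCE-style loss: samples sharing a domain label are pulled together, others pushed apart. This is a simplified stand-in for illustration, not CoDEm's actual loss, attention mechanism, or training procedure.

```python
# Simplified sketch of contrastive domain-embedding learning: an
# InfoNCE-style loss where positives are pairs sharing a domain label.
# This is a generic stand-in, not the CoDEm objective itself.
import numpy as np

def contrastive_loss(z, domains, temperature=0.5):
    """Average negative log-probability of same-domain pairs under a
    softmax over cosine similarities (embeddings z: n x d)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise
    sim = z @ z.T / temperature                        # pairwise similarity
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    log_p = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = domains[:, None] == domains[None, :]
    pos = same & ~np.eye(len(z), dtype=bool)           # same-domain pairs
    return -log_p[pos].mean()

rng = np.random.default_rng(2)
z = rng.normal(size=(6, 4))                 # 6 sample embeddings, dim 4
domains = np.array([0, 0, 1, 1, 2, 2])      # e.g. sensor-position labels
print(contrastive_loss(z, domains))
```

Minimising such a loss over a batch drives same-domain samples toward each other in embedding space, which is the role the domain embeddings play before the label embeddings are learned conditioned on them.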