
Title: Characterizing Human Box-Lifting Behavior Using Wearable Inertial Motion Sensors
Although several studies have used wearable sensors to analyze human lifting, they have generally examined only a limited set of lift characteristics. In this proof-of-concept study, we investigate multiple aspects of offline lift characterization using wearable inertial measurement sensors: detecting the start and end of the lift and classifying the vertical movement of the object, the posture used, the weight of the object, and the asymmetry involved. In addition, the lift duration, the horizontal distance from the lifter to the object, the vertical displacement of the object, and the asymmetry angle are computed as lift parameters. Twenty-four healthy participants each performed two repetitions of 30 different main lifts while wearing a commercial inertial measurement system. The data from these trials were used to develop, train, and evaluate the lift characterization algorithms presented. The lift detection algorithm had a start time error of 0.10 s ± 0.21 s and an end time error of 0.36 s ± 0.27 s across all 1489 lift trials, with no missed lifts. For posture, asymmetry, vertical movement, and weight, our classifiers achieved accuracies of 96.8%, 98.3%, 97.3%, and 64.2%, respectively, for automatically detected lifts. The vertical height and displacement estimates were, on average, within 25 cm of the reference values. The horizontal distances measured for some lifts differed from the expected values by up to 14.5 cm, but were very consistent. Estimated asymmetry angles were similarly precise. In the future, these proof-of-concept offline algorithms can be expanded and improved to work in real time, enabling applications such as real-time health monitoring and feedback for assistive devices.
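The abstract does not give the authors' detection algorithm itself, but the lift start/end detection step could be sketched in its simplest form as a threshold on the deviation of the acceleration magnitude from gravity. Everything here (function name, threshold values, minimum-duration rule) is a hypothetical illustration, not the published method:

```python
import numpy as np

def detect_lift_interval(accel_mag, fs, rest_g=9.81, thresh=0.5, min_len=0.5):
    """Hypothetical lift segmentation sketch: flag samples where the IMU
    acceleration magnitude (m/s^2) deviates from gravity by more than
    `thresh`, then return the first contiguous active run longer than
    `min_len` seconds as (start_time, end_time) in seconds."""
    active = np.abs(np.asarray(accel_mag) - rest_g) > thresh
    min_samples = int(min_len * fs)
    start = None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                      # candidate lift onset
        elif not a and start is not None:
            if i - start >= min_samples:   # long enough to count as a lift
                return start / fs, i / fs
            start = None                   # too short; discard as noise
    if start is not None and len(active) - start >= min_samples:
        return start / fs, len(active) / fs
    return None
```

A real implementation would add low-pass filtering and hysteresis to avoid fragmenting one lift into several detections; this sketch only shows the thresholding idea.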
Award ID(s):
1933409
PAR ID:
10207920
Author(s) / Creator(s):
Date Published:
Journal Name:
Sensors
Volume:
20
Issue:
8
ISSN:
1424-8220
Page Range / eLocation ID:
2323
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Lifting and carrying heavy objects is a major aspect of physically intensive jobs. Wearable sensors have previously been used to classify different ways of picking up an object, but have seen only limited use for automatic classification of load position and weight while a person is walking and carrying an object. In this proof-of-concept study, we thus used wearable inertial and electromyographic sensors for offline classification of different load positions (frontal vs. unilateral vs. bilateral side loads) and weights during gait. Ten participants each performed 19 different carrying trials while wearing the sensors, and data from these trials were used to train and evaluate classification algorithms based on supervised machine learning. The algorithms differentiated between frontal and other loads (side/none) with an accuracy of 100%, between frontal vs. unilateral side load vs. bilateral side load with an accuracy of 96.1%, and between different load asymmetry levels with accuracies of 75–79%. While the study is limited by a lack of electromyographic sensors on the arms and a limited number of load positions/weights, it shows that wearable sensors can differentiate between different load positions and weights during gait with high accuracy. In the future, such approaches could be used to control assistive devices or for long-term worker monitoring in physically demanding occupations.
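The supervised-learning pipeline described above typically reduces to windowed feature extraction followed by a classifier. The specific features and classifier used in the study are not stated, so this sketch substitutes simple per-channel statistics and a nearest-centroid classifier as a minimal stand-in:

```python
import numpy as np

def window_features(window):
    """Per-channel mean and standard deviation over one gait window.
    window: (n_samples, n_channels) array of IMU/EMG signals."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

class CentroidClassifier:
    """Minimal stand-in for a supervised load-position classifier:
    assigns each feature vector to the nearest class centroid
    (Euclidean distance). Not the classifier used in the study."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Pairwise distances from each sample to each class centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

In practice one would use a tuned model (e.g., an SVM or random forest) with cross-validation across participants; the sketch only shows the feature-then-classify structure.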
  2. Back injuries and other occupational injuries are common in workers who engage in long, arduous physical labor. The risk of these injuries could be reduced using assistive devices that automatically detect an object lifting motion and support the user while they perform the lift; however, such devices must be able to detect the lifting motion as it occurs. We thus developed a system to detect the start and end of a lift (performed as a stoop or squat) in real time based on pelvic angle and the distance between the user's hands and the user's center of mass. The measurements were input to an algorithm that first searches for hand-center distance peaks in a sliding window, then checks the pelvic displacement angle to verify lift occurrence. The approach was tested with 5 participants, who performed a total of 100 lifts of four different types. The times of actual lifts were determined by manual video annotation. The median time error (absolute difference between detected and actual occurrence time) for lifts that were not false negatives was 0.11 s; a lift was considered a false negative if it was not detected within two seconds of it actually occurring. Furthermore, 95% of lifts that were detected occurred within 0.28 s of actual occurrence. This shows that it is possible to reliably detect lifts in real time based on the pelvic displacement angle and the distance between the user's hands and their center of mass. 
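The two-stage check described above (a hand-to-center-of-mass distance peak found in a sliding window, then verified against the pelvic displacement angle) can be sketched as follows. The window length and angle threshold are illustrative placeholders, not the authors' values:

```python
import numpy as np

def detect_lifts(hand_com_dist, pelvic_angle, fs, win=1.0, angle_thresh=20.0):
    """Sketch of sliding-window lift detection: within each window, locate
    the peak of the hand-to-center-of-mass distance, then confirm a lift
    only if the pelvic angle displacement over that window exceeds
    `angle_thresh` degrees. Returns detected lift times in seconds."""
    n = int(win * fs)
    detections = []
    for start in range(0, len(hand_com_dist) - n, n):
        seg = hand_com_dist[start:start + n]
        peak = start + int(np.argmax(seg))          # candidate lift instant
        # Verification step: pelvic displacement across the same window
        ang = pelvic_angle[start:start + n]
        if ang.max() - ang.min() > angle_thresh:
            detections.append(peak / fs)
    return detections
```

A real-time version would use an overlapping (hop-by-one-sample) window and debounce repeated detections of the same lift; the sketch uses non-overlapping windows for brevity.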
  3. Step length is a critical gait parameter that allows a quantitative assessment of gait asymmetry. Gait asymmetry can lead to many potential health threats such as joint degeneration, difficult balance control, and gait inefficiency. Therefore, accurate step length estimation is essential to understand gait asymmetry and provide appropriate clinical interventions or gait training programs. The conventional method for step length measurement relies on foot-mounted inertial measurement units (IMUs). However, this may not be suitable for real-world applications due to sensor signal drift and the potential obtrusiveness of distal sensors. To overcome this challenge, we propose a deep convolutional neural network (CNN)-based step length estimation approach that uses only proximal wearable sensors (hip goniometer, trunk IMU, and thigh IMU) and is capable of generalizing to various walking speeds. To evaluate this approach, we utilized treadmill data collected from sixteen able-bodied subjects at different walking speeds, and tested the optimized model on overground walking data. Our CNN model estimated the step length with an average mean absolute error of 2.89 ± 0.89 cm across all subjects and walking speeds. Since wearable sensors and CNN models are easily deployable in real time, our study findings can provide personalized real-time step length monitoring in wearable assistive devices and gait training programs.
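As a structural illustration only, a CNN regressor of the kind described above maps a multichannel sensor window to a single step-length value through convolution, nonlinearity, pooling, and a linear head. The toy forward pass below is written in plain NumPy with made-up layer sizes; it is not the architecture from the study:

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid-mode 1D convolution. x: (in_ch, length); kernels:
    (out_ch, in_ch, k); returns (out_ch, length - k + 1)."""
    out_ch, in_ch, k = kernels.shape
    L = x.shape[1] - k + 1
    out = np.zeros((out_ch, L))
    for o in range(out_ch):
        for t in range(L):
            out[o, t] = np.sum(kernels[o] * x[:, t:t + k]) + bias[o]
    return out

def step_length_forward(x, kernels, bias, w, b):
    """Toy forward pass: conv -> ReLU -> global average pooling ->
    linear regression head producing one step-length estimate (cm)."""
    h = np.maximum(conv1d(x, kernels, bias), 0.0)  # ReLU activation
    pooled = h.mean(axis=1)                        # global average pooling
    return float(pooled @ w + b)
```

A deployed model would of course be trained (e.g., in PyTorch or TensorFlow) rather than hand-weighted; the sketch only shows the data flow from sensor channels to a scalar output.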
  4. Studies with e-textile sensors embedded in garments are typically performed on static and controlled phantom models that do not reflect the dynamic nature of wearables. Instead, our objective was to understand the noise e-textile sensors would experience during real-world scenarios. Three types of sleeves, made of loose, tight, and stretchy fabrics, were applied to a phantom arm, and the corresponding fabric movement was measured in three dimensions using physical markers and image-processing software. Our results showed that the stretchy fabrics allowed for the most consistent and predictable clothing movement (average displacement of up to −2.3 ± 0.1 cm), followed by tight fabrics (up to −4.7 ± 0.2 cm) and loose fabrics (up to −3.6 ± 1.0 cm). In addition, the results demonstrated better performance of higher-elasticity stretchy fabrics (average displacement of up to −2.3 ± 0.1 cm) over lower-elasticity ones (up to −3.8 ± 0.3 cm). For a case study with an e-textile sensor that relies on wearable loops to monitor joint flexion, our modeling indicated errors as high as 65.7° for stretchy fabric with higher elasticity. The results from this study can (a) help quantify errors of e-textile sensors operating “in-the-wild,” (b) inform decisions regarding the optimal type of clothing material used, and (c) ultimately empower studies on noise calibration for diverse e-textile sensing applications.
  5. Human activity recognition (HAR) is growing in popularity due to its wide-ranging applications in patient rehabilitation and movement disorders. HAR approaches typically start with collecting sensor data for the activities under consideration and then develop algorithms using the dataset. As such, the success of algorithms for HAR depends on the availability and quality of datasets. Most of the existing work on HAR uses data from inertial sensors on wearable devices or smartphones to design HAR algorithms. However, inertial sensors exhibit high noise that makes it difficult to segment the data and classify the activities. Furthermore, existing approaches typically do not make their data available publicly, which makes it difficult or impossible to obtain comparisons of HAR approaches. To address these issues, we present wearable HAR (w-HAR), which contains labeled data of seven activities from 22 users. Our dataset’s unique aspect is the integration of data from inertial and wearable stretch sensors, thus providing two modalities of activity information. The wearable stretch sensor data allow us to create variable-length segments and ensure that each segment contains a single activity. We also provide a HAR framework that uses w-HAR to classify the activities. To this end, we first perform a design space exploration to choose a neural network architecture for activity classification. Then, we use two online learning algorithms to adapt the classifier to users whose data are not included at design time. Experiments on the w-HAR dataset show that our framework achieves 95% accuracy, while the online learning algorithms improve the accuracy by as much as 40%.
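The variable-length segmentation idea described above (using the stretch sensor so that each segment contains a single activity) can be sketched as cutting the signal at threshold crossings. The threshold-crossing rule is an illustrative simplification, not the w-HAR segmentation algorithm itself:

```python
import numpy as np

def segment_by_stretch(stretch, thresh):
    """Hypothetical variable-length segmentation: cut the stretch-sensor
    signal wherever it crosses `thresh`, so each returned (start, end)
    segment stays within one activity state. Indices are half-open."""
    stretch = np.asarray(stretch)
    above = stretch > thresh
    # Boundaries are the samples where the above/below state flips
    boundaries = np.flatnonzero(np.diff(above.astype(int)) != 0) + 1
    edges = np.concatenate([[0], boundaries, [len(stretch)]])
    return [(int(edges[i]), int(edges[i + 1])) for i in range(len(edges) - 1)]
```

Each resulting segment would then be featurized and passed to the activity classifier; because segment lengths follow the signal rather than a fixed window, no segment mixes two activities.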