

Title: Activity Segmentation Using Wearable Sensors for DVT/PE Risk Detection
Using a wearable electromyography (EMG) sensor and an accelerometer, classification of subject activity state (i.e., walking, sitting, standing, or ankle circles) enables detection of prolonged "negative" activity states in which the calf muscles do not facilitate blood flow return through the deep veins of the leg. By applying machine learning classification to data from a multi-sensor wearable device, we classify human subject state between "positive" and "negative" activities, and among individual activity states, with greater than 95% accuracy. Some negative activity states cannot be accurately discriminated because they present similarly to an accelerometer (e.g., standing vs. sitting); however, separating these states is desirable to better inform the risk of developing a Deep Vein Thrombosis (DVT). Augmenting the accelerometer with a wearable EMG sensor improves the separability of these activities by 30%.
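As a rough illustration of the kind of pipeline the abstract describes, the sketch below computes simple window-level features from one accelerometer channel and one EMG channel and trains a random-forest classifier on synthetic data. The sampling rate, window length, feature set, and classifier are assumptions for illustration, not the configuration used in the paper.

```python
# Hypothetical sketch: window-level features from one accelerometer channel
# and one EMG channel, fed to a random-forest classifier. Window length,
# feature choices, and labels are illustrative, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

FS = 100          # assumed sampling rate (Hz)
WIN = 2 * FS      # 2-second windows

def window_features(acc, emg):
    """Summary features for one window of accelerometer magnitude and EMG."""
    return np.array([
        acc.mean(), acc.std(), np.ptp(acc),          # posture / motion cues
        np.sqrt(np.mean(emg ** 2)),                  # EMG RMS (muscle activity)
        np.mean(np.abs(np.diff(emg)) > 0.01),        # crude EMG activity rate
    ])

def featurize(acc_signal, emg_signal):
    """Slice synchronized signals into windows and compute features."""
    n = min(len(acc_signal), len(emg_signal)) // WIN
    return np.stack([
        window_features(acc_signal[i * WIN:(i + 1) * WIN],
                        emg_signal[i * WIN:(i + 1) * WIN])
        for i in range(n)
    ])

# Synthetic stand-in data: two activity classes, 200 windows each.
rng = np.random.default_rng(0)
X = np.vstack([featurize(rng.normal(1.0, 0.05, 200 * WIN),   # "standing"-like
                         rng.normal(0.0, 0.02, 200 * WIN)),
               featurize(rng.normal(1.0, 0.30, 200 * WIN),   # "walking"-like
                         rng.normal(0.0, 0.20, 200 * WIN))])
y = np.array([0] * 200 + [1] * 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```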
Award ID(s):
1816387
NSF-PAR ID:
10118762
Author(s) / Creator(s):
Date Published:
Journal Name:
2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC)
Page Range / eLocation ID:
477 to 483
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Li-Jessen, Nicole Yee-Key (Ed.)
    The Earable device is a behind-the-ear wearable originally developed to measure cognitive function. Since Earable measures electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), it may also have the potential to objectively quantify facial muscle and eye movement activities relevant in the assessment of neuromuscular disorders. As an initial step toward developing a digital assessment in neuromuscular disorders, a pilot study was conducted to determine whether the Earable device could be utilized to objectively measure facial muscle and eye movements intended to be representative of Performance Outcome Assessments (PerfOs), with tasks designed to model clinical PerfOs, referred to as mock-PerfO activities. The specific aims of this study were: to determine whether the Earable raw EMG, EOG, and EEG signals could be processed to extract features describing these waveforms; to determine Earable feature data quality, test-retest reliability, and statistical properties; to determine whether features derived from Earable could be used to differentiate various facial muscle and eye movement activities; and to determine which features and feature types are important for mock-PerfO activity classification. A total of N = 10 healthy volunteers participated in the study. Each study participant performed 16 mock-PerfO activities, including talking, chewing, swallowing, eye closure, gazing in different directions, puffing cheeks, chewing an apple, and making various facial expressions. Each activity was repeated four times in the morning and four times at night. A total of 161 summary features were extracted from the EEG, EMG, and EOG bio-sensor data. Feature vectors were used as input to machine learning models to classify the mock-PerfO activities, and model performance was evaluated on a held-out test set. Additionally, a convolutional neural network (CNN) was used to classify low-level representations of the raw bio-sensor data for each task, and model performance was correspondingly evaluated and compared directly to feature classification performance. Model prediction accuracy was used to quantify the Earable device's ability to classify these activities. Study results indicate that Earable can potentially quantify different aspects of facial and eye movements and may be used to differentiate mock-PerfO activities. Specifically, Earable was found to differentiate talking, chewing, and swallowing tasks from other tasks with observed F1 scores >0.9. While EMG features contribute to classification accuracy for all tasks, EOG features are important for classifying gaze tasks. Finally, we found that analysis with summary features outperformed a CNN for activity classification. We believe Earable may be used to measure cranial muscle activity relevant for neuromuscular disorder assessment. Classification performance of mock-PerfO activities with summary features enables a strategy for detecting disease-specific signals relative to controls, as well as the monitoring of intra-subject treatment responses. Further testing is needed to evaluate the Earable device in clinical populations and clinical development settings.
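For illustration only, the snippet below shows how a handful of generic summary features could be computed from a single bio-sensor channel window; the 161 features actually extracted in the study are not specified here, and the sampling rate, frequency bands, and feature names are assumptions.

```python
# Illustrative summary features for one bio-sensor channel window (the study's
# actual 161 features are not specified; names and bands are assumptions).
import numpy as np
from scipy.signal import welch

def channel_summary(x, fs=250.0):
    """A few generic waveform descriptors for an EEG/EMG/EOG window."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
    def band_power(lo, hi):
        mask = (f >= lo) & (f < hi)
        return np.trapz(pxx[mask], f[mask])
    return {
        "rms": float(np.sqrt(np.mean(x ** 2))),
        "zero_cross_rate": float(np.mean(np.diff(np.sign(x)) != 0)),
        "power_1_4Hz": float(band_power(1, 4)),      # slow (EOG-like) activity
        "power_20_45Hz": float(band_power(20, 45)),  # fast (EMG-like) activity
    }

# Example: a synthetic 2-second window at an assumed 250 Hz sampling rate
rng = np.random.default_rng(1)
print(channel_summary(rng.normal(size=500)))
```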
  2. Parkinson’s Disease (PD) is a neurodegenerative disorder affecting the substantia nigra; as a result, more than half of PD patients are considered to be at high risk of falling. Recently, Inertial Measurement Unit (IMU) sensors have shown great promise in the classification of activities of daily living (ADL) such as walking, standing, sitting, and lying down, which constitute normal movement in daily life. Measuring physical activity level through longitudinal ADL monitoring among PD patients could provide insights into their fall mechanisms. In this study, six PD patients (mean age = 74.3 ± 6.5 years) and six young healthy subjects (mean age = 19.7 ± 2.7 years) were recruited. All subjects were asked to wear a single accelerometer, the DynaPort MM+ (Motion Monitor+, McRoberts BV, The Hague, Netherlands), sampling at 100 Hz and located at the L5-S1 spinal area, for 3 days. Subjects maintained a log of the activities they performed and removed the sensor only while showering or performing other aquatic activities. The resultant acceleration was filtered using high- and low-pass Butterworth filters to separate dynamic and stationary activities. Healthy young subjects were found to perform significantly more dynamic activities (13.2%) than PD subjects (7%); conversely, PD subjects spent significantly more time in stationary activities (92.9%) than young healthy subjects (86.8%).
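A minimal sketch of the filtering step described above, assuming a 0.5 Hz Butterworth cutoff and an arbitrary movement threshold (neither value is reported in the abstract):

```python
# Minimal sketch: split the resultant acceleration into slow (posture) and
# fast (movement) components with Butterworth filters. Cutoff and threshold
# values are assumptions, not the study's parameters.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100.0        # DynaPort MM+ sampling rate from the abstract (Hz)
CUTOFF = 0.5      # assumed cutoff separating posture from movement (Hz)

def split_components(acc_magnitude):
    """Return (stationary, dynamic) components of the acceleration magnitude."""
    b_lo, a_lo = butter(4, CUTOFF / (FS / 2), btype="low")
    b_hi, a_hi = butter(4, CUTOFF / (FS / 2), btype="high")
    return filtfilt(b_lo, a_lo, acc_magnitude), filtfilt(b_hi, a_hi, acc_magnitude)

def dynamic_fraction(acc_magnitude, threshold=0.05):
    """Fraction of samples whose movement component exceeds a threshold (g)."""
    _, dynamic = split_components(acc_magnitude)
    return float(np.mean(np.abs(dynamic) > threshold))

# Example on synthetic data: quiet sitting vs. a noisier "walking" segment
rng = np.random.default_rng(2)
t = np.arange(6000) / FS
sitting = 1.0 + rng.normal(0, 0.01, 6000)
walking = 1.0 + 0.3 * np.sin(2 * np.pi * 2.0 * t) + rng.normal(0, 0.05, 6000)
print("sitting dynamic fraction:", dynamic_fraction(sitting))
print("walking dynamic fraction:", dynamic_fraction(walking))
```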
  3.
    Background: Machine learning has been used for classification of physical behavior bouts from hip-worn accelerometers; however, this research has been limited due to the challenges of directly observing and coding human behavior “in the wild.” Deep learning algorithms, such as convolutional neural networks (CNNs), may offer better representation of data than other machine learning algorithms without the need for engineered features and may be better suited to dealing with free-living data. The purpose of this study was to develop a modeling pipeline for evaluation of a CNN model on a free-living data set and compare CNN inputs and results with the commonly used machine learning random forest and logistic regression algorithms. Method: Twenty-eight free-living women wore an ActiGraph GT3X+ accelerometer on their right hip for 7 days. A concurrently worn thigh-mounted activPAL device captured ground truth activity labels. The authors evaluated logistic regression, random forest, and CNN models for classifying sitting, standing, and stepping bouts. The authors also assessed the benefit of performing feature engineering for this task. Results: The CNN classifier performed best (average balanced accuracy for bout classification of sitting, standing, and stepping was 84%) compared with the other methods (56% for logistic regression and 76% for random forest), even without performing any feature engineering. Conclusion: Using the recent advancements in deep neural networks, the authors showed that a CNN model can outperform other methods even without feature engineering. This has important implications for both the model’s ability to deal with the complexity of free-living data and its potential transferability to new populations.
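As a hedged sketch of this kind of approach, the code below defines a small 1-D CNN over raw tri-axial accelerometer windows and runs one training step on random stand-in data; the study's actual architecture, window length, and training setup are not reproduced here.

```python
# Hedged sketch of a 1-D CNN over raw tri-axial accelerometer windows.
# Architecture, window length, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class BoutCNN(nn.Module):
    """Small convolutional classifier for sitting / standing / stepping bouts."""
    def __init__(self, n_channels=3, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over time -> fixed size
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (batch, channels, samples)
        return self.classifier(self.features(x).squeeze(-1))

# One illustrative training step on random stand-in data (e.g., 30 Hz x 10 s windows)
model = BoutCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 300)                    # batch of 8 accelerometer windows
y = torch.randint(0, 3, (8,))                 # sitting / standing / stepping labels
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("one-step loss:", float(loss))
```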
  4. Abstract: In the past few years, smart mobile devices have become ubiquitous. Most of these devices have embedded sensors such as GPS, accelerometer, gyroscope, etc. There is a growing trend to use these sensors for user identification and activity recognition. Most prior work, however, contains results on a small number of classifiers, data, or activities. We present a comprehensive evaluation of ten representative classifiers used for identification on two publicly available data sets (thus our work is reproducible). Our results include data obtained from dynamic activities, such as walking and running; static postures, such as sitting and standing; and an aggregate of activities that combine dynamic, static, and postural transitions, such as sit-to-stand or stand-to-sit. Our identification results on aggregate data include both labeled and unlabeled activities. Our results show that the k-Nearest Neighbors algorithm consistently outperforms other classifiers. We also show that by extracting appropriate features and using appropriate classifiers, static and aggregate activities can be used for user identification. We posit that this work will serve as a resource and a benchmark for the selection and evaluation of classification algorithms for activity-based identification on smartphones.
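The following minimal example mirrors the reported finding that k-Nearest Neighbors works well for activity-based user identification, using synthetic per-window feature vectors; the features, number of users, and choice of k are illustrative assumptions rather than the paper's setup.

```python
# Minimal, assumed-parameter example of user identification with k-NN on
# per-window feature vectors; the features and k are illustrative only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_users, windows_per_user, n_features = 5, 60, 12

# Each synthetic "user" gets a slightly shifted feature distribution, standing
# in for gait/posture differences captured by real sensor-derived features.
X = np.vstack([rng.normal(loc=u * 0.5, scale=1.0, size=(windows_per_user, n_features))
               for u in range(n_users)])
y = np.repeat(np.arange(n_users), windows_per_user)

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=5)
print("cross-validated identification accuracy:", scores.mean())
```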
  5. Abstract Background

    The research gap addressed in this study is the applicability of deep neural network (NN) models on wearable sensor data to recognize different activities performed by patients with Parkinson’s Disease (PwPD) and the generalizability of these models to PwPD using labeled healthy data.

    Methods

    The experiments were carried out utilizing three datasets containing wearable motion sensor readings on common activities of daily living. The collected readings were from two accelerometer sensors. PAMAP2 and MHEALTH are publicly available datasets collected from 10 and 9 healthy, young subjects, respectively. A private dataset of a similar nature, collected from 14 PwPD, was utilized as well. Deep NN models were implemented with varying levels of complexity to investigate the impact of data augmentation, manual axis reorientation, model complexity, and domain adaptation on activity recognition performance.

    Results

    A moderately complex model trained on the augmented PAMAP2 dataset and adapted to the Parkinson domain using domain adaptation achieved the best activity recognition performance, with an accuracy of 73.02%, significantly higher than the accuracy of 63% reported in previous studies. The model’s F1 score of 49.79% was also a significant improvement over the best cross-testing results of 33.66% F1 score with data augmentation alone and 2.88% F1 score without data augmentation or domain adaptation.

    Conclusion

    These findings suggest that deep NN models trained on healthy data have the potential to accurately recognize activities performed by PwPD, and that data augmentation and domain adaptation can improve the generalizability of models in the healthy-to-PwPD transfer scenario. The simple and moderately complex architectures tested in this study could generalize better to the PwPD domain when trained on a healthy dataset than the most complex architectures used. The findings of this study could contribute to the development of accurate wearable-based activity monitoring solutions for PwPD, improving clinical decision-making and patient outcomes based on patient activity levels.
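To illustrate the kind of accelerometer data augmentation referred to in this abstract, the sketch below applies jitter and a random rotation about the vertical axis to accelerometer windows; the specific augmentations and parameters used in the study are assumptions here.

```python
# Illustrative accelerometer-window augmentations (jitter and random rotation);
# the study's actual augmentation scheme and parameters are not reproduced here.
import numpy as np

rng = np.random.default_rng(4)

def jitter(window, sigma=0.03):
    """Add small Gaussian noise to a (samples, 3) accelerometer window."""
    return window + rng.normal(0.0, sigma, window.shape)

def random_rotation(window):
    """Rotate a (samples, 3) window by a random angle about the vertical axis,
    simulating differences in sensor orientation between wearers."""
    theta = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return window @ rot.T

def augment(windows):
    """Double a set of windows by appending jittered, rotated copies."""
    extra = np.stack([random_rotation(jitter(w)) for w in windows])
    return np.concatenate([windows, extra])

# Example: 100 windows of 128 samples x 3 axes -> 200 windows after augmentation
windows = rng.normal(size=(100, 128, 3))
print(augment(windows).shape)
```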

     