Title: Activity Segmentation Using Wearable Sensors for DVT/PE Risk Detection
Using a wearable electromyography (EMG) sensor and an accelerometer, classification of subject activity state (i.e., walking, sitting, standing, or ankle circles) enables detection of prolonged "negative" activity states in which the calf muscles do not facilitate blood flow return via the deep veins of the leg. By employing machine learning classification on a multi-sensor wearable device, we are able to classify human subject state between "positive" and "negative" activities, and among each activity state, with greater than 95% accuracy. Some negative activity states cannot be accurately discriminated due to their similar presentation to an accelerometer (i.e., standing vs. sitting); however, it is desirable to separate these states to better inform the risk of developing a deep vein thrombosis (DVT). Augmentation with a wearable EMG sensor improves the separability of these activities by 30%.
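The abstract does not spell out the classification pipeline, but the general approach (windowed summary features from sensor streams fed to a classifier) can be illustrated with a minimal sketch. The features (per-window mean and standard deviation of the sensor magnitude) and the nearest-centroid classifier below are assumptions for illustration, not the paper's actual method:

```python
import math

def window_features(samples):
    """Summary features for one window of sensor magnitudes:
    mean and standard deviation (common accelerometer/EMG features)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return (mean, math.sqrt(var))

def nearest_centroid(train, query):
    """Assign `query` to the label whose class centroid is closest in
    Euclidean distance. `train` maps label -> list of feature vectors."""
    best_label, best_dist = None, float("inf")
    for label, vecs in train.items():
        dims = len(vecs[0])
        centroid = [sum(v[d] for v in vecs) / len(vecs) for d in range(dims)]
        dist = math.dist(centroid, query)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy windows: "walking" magnitudes oscillate, "sitting" is nearly flat.
walk = [window_features([1.0, 1.8, 0.4, 1.6, 0.5, 1.7]) for _ in range(3)]
sit = [window_features([1.0, 1.01, 0.99, 1.0, 1.0, 0.99]) for _ in range(3)]
train = {"walking": walk, "sitting": sit}
print(nearest_centroid(train, window_features([0.9, 1.9, 0.3, 1.5, 0.6, 1.8])))  # → walking
```

A real system would use many more features (and, per the abstract, EMG features to separate standing from sitting, which look identical to an accelerometer), but the window-then-classify structure is the same.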
Award ID(s):
1816387
NSF-PAR ID:
10118762
Journal Name:
2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC)
Page Range / eLocation ID:
477 to 483
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Li-Jessen, Nicole Yee-Key (Ed.)
    The Earable device is a behind-the-ear wearable originally developed to measure cognitive function. Since Earable measures electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), it may also have the potential to objectively quantify facial muscle and eye movement activities relevant in the assessment of neuromuscular disorders. As an initial step to developing a digital assessment in neuromuscular disorders, a pilot study was conducted to determine whether the Earable device could be utilized to objectively measure facial muscle and eye movements intended to be representative of Performance Outcome Assessments (PerfOs), with tasks designed to model clinical PerfOs, referred to as mock-PerfO activities. The specific aims of this study were: To determine whether the Earable raw EMG, EOG, and EEG signals could be processed to extract features describing these waveforms; To determine Earable feature data quality, test re-test reliability, and statistical properties; To determine whether features derived from Earable could be used to determine the difference between various facial muscle and eye movement activities; and, To determine what features and feature types are important for mock-PerfO activity level classification. A total of N = 10 healthy volunteers participated in the study. Each study participant performed 16 mock-PerfO activities, including talking, chewing, swallowing, eye closure, gazing in different directions, puffing cheeks, chewing an apple, and making various facial expressions. Each activity was repeated four times in the morning and four times at night. A total of 161 summary features were extracted from the EEG, EMG, and EOG bio-sensor data. Feature vectors were used as input to machine learning models to classify the mock-PerfO activities, and model performance was evaluated on a held-out test set.
Additionally, a convolutional neural network (CNN) was used to classify low-level representations of the raw bio-sensor data for each task, and model performance was correspondingly evaluated and compared directly to feature classification performance. Model prediction accuracy was used to quantitatively assess the Earable device's classification ability. Study results indicate that Earable can potentially quantify different aspects of facial and eye movements and may be used to differentiate mock-PerfO activities. Specifically, Earable was found to differentiate talking, chewing, and swallowing tasks from other tasks with observed F1 scores >0.9. While EMG features contribute to classification accuracy for all tasks, EOG features are important for classifying gaze tasks. Finally, we found that analysis with summary features outperformed a CNN for activity classification. We believe Earable may be used to measure cranial muscle activity relevant for neuromuscular disorder assessment. Classification performance of mock-PerfO activities with summary features enables a strategy for detecting disease-specific signals relative to controls, as well as the monitoring of intra-subject treatment responses. Further testing is needed to evaluate the Earable device in clinical populations and clinical development settings.
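The F1 score quoted above is the harmonic mean of precision and recall, computed per task from true positives, false positives, and false negatives. A one-function sketch (the counts below are made up for illustration, not taken from the study):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts for one task, e.g. "talking":
# 45 true positives, 3 false positives, 5 false negatives.
print(round(f1_score(45, 3, 5), 3))  # → 0.918
```

Counts like these would clear the >0.9 F1 threshold the study reports for talking, chewing, and swallowing.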
  2. Parkinson’s Disease (PD) is a neurodegenerative disorder affecting the substantia nigra, and consequently more than half of PD patients are considered to be at high risk of falling. Recently, Inertial Measurement Unit (IMU) sensors have shown great promise in the classification of activities of daily living (ADL) such as walking, standing, sitting, and lying down, considered to be normal movement in daily life. Measuring physical activity level from longitudinal ADL monitoring among PD patients could provide insights into their fall mechanisms. In this study, six PD patients (mean age=74.3±6.5 years) and six young healthy subjects (mean age=19.7±2.7 years) were recruited. All the subjects were asked to wear a single accelerometer, DynaPort MM+ (Motion Monitor+, McRoberts BV, The Hague, Netherlands), with a sampling frequency of 100 Hz, located at the L5-S1 spinal area for 3 days. Subjects maintained a log of activities they performed and only removed the sensor while showering or performing other aquatic activities. The resultant acceleration was filtered using high- and low-pass Butterworth filters to determine dynamic and stationary activities. As a result, it was found that healthy young subjects performed significantly more dynamic activities (13.2%) than PD subjects (7%); conversely, PD subjects (92.9%) had significantly more stationary activity than young healthy subjects (86.8%).
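The dynamic/stationary split above starts from the resultant (vector magnitude) of the three accelerometer axes. As a rough sketch, a per-window variability threshold can stand in for the study's Butterworth filtering; the window size and threshold here are illustrative assumptions:

```python
import math

def resultant(ax, ay, az):
    """Magnitude of the 3-axis acceleration vector; ~1 g at rest."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def classify_windows(samples, window=4, threshold=0.1):
    """Label each window 'dynamic' if the resultant acceleration varies
    by more than `threshold` (g) within the window, else 'stationary'.
    (The study used Butterworth high/low-pass filters; per-window
    standard deviation is a simpler stand-in for illustration.)"""
    labels = []
    for i in range(0, len(samples) - window + 1, window):
        w = samples[i:i + window]
        mean = sum(w) / window
        sd = math.sqrt(sum((s - mean) ** 2 for s in w) / window)
        labels.append("dynamic" if sd > threshold else "stationary")
    return labels

# Sitting (~1 g, flat) followed by walking (oscillating magnitude).
mags = [resultant(0.0, 0.0, 1.0)] * 4 + [1.0, 1.6, 0.5, 1.4]
print(classify_windows(mags))  # → ['stationary', 'dynamic']
```

Summing the per-window labels over 3 days of wear yields the dynamic vs. stationary percentages the study compares between groups.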
  3.
    Background: Machine learning has been used for classification of physical behavior bouts from hip-worn accelerometers; however, this research has been limited due to the challenges of directly observing and coding human behavior “in the wild.” Deep learning algorithms, such as convolutional neural networks (CNNs), may offer better representation of data than other machine learning algorithms without the need for engineered features and may be better suited to dealing with free-living data. The purpose of this study was to develop a modeling pipeline for evaluation of a CNN model on a free-living data set and compare CNN inputs and results with the commonly used machine learning random forest and logistic regression algorithms. Method: Twenty-eight free-living women wore an ActiGraph GT3X+ accelerometer on their right hip for 7 days. A concurrently worn thigh-mounted activPAL device captured ground truth activity labels. The authors evaluated logistic regression, random forest, and CNN models for classifying sitting, standing, and stepping bouts. The authors also assessed the benefit of performing feature engineering for this task. Results: The CNN classifier performed best (average balanced accuracy for bout classification of sitting, standing, and stepping was 84%) compared with the other methods (56% for logistic regression and 76% for random forest), even without performing any feature engineering. Conclusion: Using the recent advancements in deep neural networks, the authors showed that a CNN model can outperform other methods even without feature engineering. This has important implications for both the model’s ability to deal with the complexity of free-living data and its potential transferability to new populations.
  4. Abstract: In the past few years, smart mobile devices have become ubiquitous. Most of these devices have embedded sensors such as GPS, accelerometer, gyroscope, etc. There is a growing trend to use these sensors for user identification and activity recognition. Most prior work, however, contains results on a small number of classifiers, data, or activities. We present a comprehensive evaluation of ten representative classifiers used in identification on two publicly available data sets (thus our work is reproducible). Our results include data obtained from dynamic activities, such as walking and running; static postures such as sitting and standing; and an aggregate of activities that combine dynamic, static, and postural transitions, such as sit-to-stand or stand-to-sit. Our identification results on aggregate data include both labeled and unlabeled activities. Our results show that the k-Nearest Neighbors algorithm consistently outperforms other classifiers. We also show that by extracting appropriate features and using appropriate classifiers, static and aggregate activities can be used for user identification. We posit that this work will serve as a resource and a benchmark for the selection and evaluation of classification algorithms for activity based identification on smartphones.
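k-Nearest Neighbors, the best-performing classifier in the study above, labels a query by majority vote among its k closest training points. A minimal sketch; the two-dimensional gait features and user labels below are hypothetical, and real systems extract many more features per window:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """k-Nearest Neighbors: label `query` by majority vote among the
    k training points closest in Euclidean distance.
    `train` is a list of (feature_vector, label) pairs."""
    neighbors = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy gait features (e.g. step-interval mean, acceleration variance)
# for two hypothetical users.
train = [
    ((0.50, 0.30), "userA"), ((0.52, 0.28), "userA"), ((0.48, 0.33), "userA"),
    ((0.80, 0.10), "userB"), ((0.78, 0.12), "userB"), ((0.83, 0.09), "userB"),
]
print(knn_predict(train, (0.51, 0.31)))  # → userA
```

kNN needs no training phase beyond storing examples, which partly explains its popularity as a baseline in sensor-based identification work.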
  5. Background

    As mobile health (mHealth) studies become increasingly productive owing to the advancements in wearable and mobile sensor technology, our ability to monitor and model human behavior will be constrained by participant receptivity. Many health constructs are dependent on subjective responses, and without such responses, researchers are left with little to no ground truth to accompany their ever-growing biobehavioral data. This issue can significantly impact the quality of a study, particularly for populations known to exhibit lower compliance rates. To address this challenge, researchers have proposed innovative approaches that use machine learning (ML) and sensor data to modify the timing and delivery of surveys. However, an overarching concern is the potential introduction of biases or unintended influences on participants’ responses when implementing new survey delivery methods.

    Objective

    This study aims to demonstrate the potential impact of an ML-based ecological momentary assessment (EMA) delivery system (using receptivity as the predictor variable) on the participants’ reported emotional state. We examine the factors that affect participants’ receptivity to EMAs in a 10-day wearable and EMA–based emotional state–sensing mHealth study. We study the physiological relationships indicative of receptivity and affect while also analyzing the interaction between the 2 constructs.

    Methods

    We collected data from 45 healthy participants wearing 2 devices that measured electrodermal activity, acceleration, electrocardiography, and skin temperature while they answered 10 EMAs daily, containing questions about perceived mood. Owing to the nature of our constructs, we can only obtain ground truth measures for both affect and receptivity during responses. Therefore, we used unsupervised and supervised ML methods to infer affect when a participant did not respond. Our unsupervised method used k-means clustering to determine the relationship between physiology and receptivity and then inferred the emotional state during nonresponses. For the supervised learning method, we primarily used random forest and neural networks to predict the affect of unlabeled data points as well as receptivity.
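The unsupervised step above rests on standard k-means (Lloyd's algorithm): group physiological summaries into clusters, then use cluster membership to infer state during nonresponses. A minimal two-cluster sketch on scalar features; the electrodermal-activity values are invented for illustration and do not come from the study:

```python
def kmeans2(points, iters=20):
    """Lloyd's k-means with k=2 on scalar features (e.g. one summary
    physiological value per EMA prompt). Centroids are initialized at
    the data extremes; returns (centroids, cluster assignments)."""
    centroids = [min(points), max(points)]
    assign = []
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        assign = [min((0, 1), key=lambda c: abs(p - centroids[c]))
                  for p in points]
        # Update step: each centroid moves to the mean of its members.
        for c in (0, 1):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, assign

# Hypothetical electrodermal-activity summaries: low-arousal prompts
# cluster apart from high-arousal prompts.
eda = [0.9, 1.1, 1.0, 3.8, 4.1, 4.0]
centroids, assign = kmeans2(eda)
print(assign)  # → [0, 0, 0, 1, 1, 1]
```

Once prompts cluster by physiology, an unanswered prompt can be assigned to its nearest cluster and inherit the affect profile of the answered prompts in that cluster, which is the inference strategy the Methods paragraph describes.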

    Results

    Our findings showed that using a receptivity model to trigger EMAs decreased the reported negative affect by >3 points or 0.29 SDs in our self-reported affect measure, scored between 13 and 91. The findings also showed a bimodal distribution of our predicted affect during nonresponses. This indicates that this system initiates EMAs more commonly during states of higher positive emotions.

    Conclusions

    Our results showed a clear relationship between affect and receptivity. This relationship can affect the efficacy of an mHealth study, particularly those that use an ML algorithm to trigger EMAs. Therefore, we propose that future work should focus on a smart trigger that promotes EMA receptivity without influencing affect during sampled time points.
