

Title: Automated Cognitive Health Assessment Using Partially Complete Time Series Sensor Data
Abstract

Background

Behavior and health are inextricably linked. As a result, continuous wearable sensor data offer the potential to predict clinical measures. However, interruptions in data collection occur, creating a need for strategic data imputation.

Objective

The objective of this work is to adapt a data generation algorithm to impute multivariate time series data. This will allow us to create digital behavior markers that can predict clinical health measures.

Methods

We created a bidirectional time series generative adversarial network to impute missing sensor readings. Values are imputed based on relationships between multiple fields and multiple points in time, for single time points or larger time gaps. From the complete data, digital behavior markers are extracted and mapped to predicted clinical measures.

Results

We validate our approach using continuous smartwatch data for n = 14 participants. When reconstructing omitted data, we observe an average normalized mean absolute error of 0.0197. We then create machine learning models to predict clinical measures from the reconstructed, complete data, with correlations ranging from r = 0.1230 to r = 0.7623. This work indicates that wearable sensor data collected in the wild can offer insights into a person's health in natural settings.
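The reconstruction quality above is reported as a normalized mean absolute error. As an illustration, here is a minimal sketch of one common variant, range-normalized MAE; the abstract does not state which normalization the paper uses, so the range-based choice is an assumption:

```python
def nmae(observed, imputed):
    """Mean absolute error between held-out observed sensor values and their
    imputed replacements, normalized by the observed value range."""
    assert len(observed) == len(imputed) and observed
    mae = sum(abs(o, ) if False else abs(o - p) for o, p in zip(observed, imputed)) / len(observed)
    value_range = max(observed) - min(observed)
    return mae / value_range if value_range else 0.0

# Illustrative heart-rate readings (bpm) omitted from a series vs. their imputations.
error = nmae([60, 70, 80, 90], [61, 69, 82, 88])  # -> 0.05
```

A score near 0 (such as the 0.0197 reported above) means imputed values deviate from the held-out truth by only a small fraction of the signal's range.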
Award ID(s):
1954372
NSF-PAR ID:
10418197
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Methods of Information in Medicine
Volume:
61
Issue:
03/04
ISSN:
0026-1270
Page Range / eLocation ID:
099 to 110
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Background

    As mobile health (mHealth) studies become increasingly productive owing to the advancements in wearable and mobile sensor technology, our ability to monitor and model human behavior will be constrained by participant receptivity. Many health constructs are dependent on subjective responses, and without such responses, researchers are left with little to no ground truth to accompany our ever-growing biobehavioral data. This issue can significantly impact the quality of a study, particularly for populations known to exhibit lower compliance rates. To address this challenge, researchers have proposed innovative approaches that use machine learning (ML) and sensor data to modify the timing and delivery of surveys. However, an overarching concern is the potential introduction of biases or unintended influences on participants’ responses when implementing new survey delivery methods.

    Objective

    This study aims to demonstrate the potential impact of an ML-based ecological momentary assessment (EMA) delivery system (using receptivity as the predictor variable) on the participants’ reported emotional state. We examine the factors that affect participants’ receptivity to EMAs in a 10-day wearable and EMA–based emotional state–sensing mHealth study. We study the physiological relationships indicative of receptivity and affect while also analyzing the interaction between the 2 constructs.

    Methods

    We collected data from 45 healthy participants wearing 2 devices measuring electrodermal activity, acceleration, electrocardiography, and skin temperature while answering 10 daily EMAs containing questions about perceived mood. Owing to the nature of our constructs, we can only obtain ground truth measures for both affect and receptivity during responses. Therefore, we used unsupervised and supervised ML methods to infer affect when a participant did not respond. Our unsupervised method used k-means clustering to determine the relationship between physiology and receptivity and then inferred the emotional state during nonresponses. For the supervised learning method, we primarily used random forest and neural networks to predict the affect of unlabeled data points as well as receptivity.
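The unsupervised step above (cluster physiological windows, then assign nonresponse windows to a cluster) can be sketched with a plain Lloyd's-algorithm k-means; the feature vectors here are illustrative stand-ins, not the study's actual feature set:

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's algorithm over physiological feature vectors
    (e.g., mean EDA, heart rate, skin temperature per time window)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each window to its nearest centroid.
            clusters[min(range(k), key=lambda c: math.dist(p, centroids[c]))].append(p)
        new_centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return centroids

def infer_cluster(point, centroids):
    """Assign a nonresponse window to the nearest physiological cluster."""
    return min(range(len(centroids)), key=lambda c: math.dist(point, centroids[c]))
```

During nonresponses, `infer_cluster` picks the cluster whose labeled windows (from answered EMAs) would then supply the inferred emotional state.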

    Results

    Our findings showed that using a receptivity model to trigger EMAs decreased reported negative affect by >3 points, or 0.29 SDs, on our self-reported affect measure, scored between 13 and 91. The findings also showed a bimodal distribution of predicted affect during nonresponses, indicating that the system initiates EMAs more often during states of higher positive emotion.

    Conclusions

    Our results showed a clear relationship between affect and receptivity. This relationship can affect the efficacy of an mHealth study, particularly those that use an ML algorithm to trigger EMAs. Therefore, we propose that future work should focus on a smart trigger that promotes EMA receptivity without influencing affect during sampled time points.

     
  2. New modes of technology are offering unprecedented opportunities to unobtrusively collect data about people's behavior. While there are many use cases for such information, we explore its utility for predicting multiple clinical assessment scores. Because clinical assessments are typically used as screening tools for impairment and disease, such as mild cognitive impairment (MCI), automatically mapping behavioral data to assessment scores can help detect changes in health and behavior across time. In this article, we aim to extract behavior markers from two modalities, a smart home environment and a custom digital memory notebook app, for mapping to 10 clinical assessments that are relevant for monitoring MCI onset and changes in cognitive health. Smart-home-based behavior markers reflect hourly, daily, and weekly activity patterns, while app-based behavior markers reflect app usage and writing content/style derived from free-form journal entries. We describe machine learning techniques for fusing these multimodal behavior markers and utilizing joint prediction. We evaluate our approach using three regression algorithms and data from 14 participants with MCI living in a smart-home environment. We observed moderate to large correlations between predicted and ground-truth assessment scores, ranging from r = 0.601 to r = 0.871 for each clinical assessment. 
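The fusion and joint-prediction step described above can be sketched as follows; the k-nearest-neighbors regressor is a simple stand-in for the three regression algorithms the paper actually evaluates, and all feature and score values are made up for illustration:

```python
import math

def fuse(smart_home_markers, app_markers):
    """Feature-level fusion: concatenate the per-participant marker vectors
    from the smart home environment and the digital memory notebook app."""
    return tuple(smart_home_markers) + tuple(app_markers)

def joint_predict(train, query_features, k=3):
    """Predict all clinical assessment scores at once from fused markers.
    train: list of (fused_feature_vector, assessment_score_vector) pairs."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query_features))[:k]
    n_scores = len(nearest[0][1])
    # Average each assessment score over the k nearest participants.
    return [sum(pair[1][j] for pair in nearest) / k for j in range(n_scores)]
```

Predicting all assessments from one fused vector is what "joint prediction" means here: a single model call yields the full score vector rather than one model per assessment.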
  3. Background Inhibitory control, or inhibition, is one of the core executive functions of humans. It contributes to our attention, performance, and physical and mental well-being. Our inhibitory control is modulated by various factors and therefore fluctuates over time. Being able to continuously and unobtrusively assess our inhibitory control and understand the mediating factors may allow us to design intelligent systems that help manage our inhibitory control and ultimately our well-being. Objective The aim of this study is to investigate whether we can assess individuals’ inhibitory control using an unobtrusive and scalable approach to identify digital markers that are predictive of changes in inhibitory control. Methods We developed InhibiSense, an app that passively collects the following information: users’ behaviors based on their phone use and sensor data, ground truth measures of their inhibitory control measured with stop-signal tasks (SSTs) and ecological momentary assessments (EMAs), and heart rate information transmitted from a wearable heart rate monitor (Polar H10). We conducted a 4-week in-the-wild study, where participants were asked to install InhibiSense on their phone and wear a Polar H10. We used generalized estimating equation (GEE) and gradient boosting tree models fitted with features extracted from participants’ phone use and sensor data to predict their stop-signal reaction time (SSRT), an objective metric used to measure an individual’s inhibitory control, and identify the predictive digital markers. Results A total of 12 participants completed the study, and 2189 EMAs and SST responses were collected. The results from the GEE models suggest that the top digital markers positively associated with an individual’s SSRT include phone use burstiness (P=.005), the mean duration between 2 consecutive phone use sessions (P=.02), the change rate of battery level when the phone was not charged (P=.04), and the frequency of incoming calls (P=.03).
The top digital markers negatively associated with SSRT include the standard deviation of acceleration (P<.001), the frequency of short phone use sessions (P<.001), the mean duration of incoming calls (P<.001), the mean decibel level of ambient noise (P=.007), and the percentage of time in which the phone was connected to the internet through a mobile network (P=.001). No significant correlation between the participants’ objective and subjective measurement of inhibitory control was found. Conclusions We identified phone-based digital markers that were predictive of changes in inhibitory control and how they were positively or negatively associated with a person’s inhibitory control. The results of this study corroborate the findings of previous studies, which suggest that inhibitory control can be assessed continuously and unobtrusively in the wild. We discussed some potential applications of the system and how technological interventions can be designed to help manage inhibitory control. 
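Among the markers above, "phone use burstiness" is typically computed from the intervals between consecutive use sessions. A minimal sketch using the standard Goh-Barabási burstiness coefficient (assuming that is the variant the study used, which the abstract does not confirm):

```python
import statistics

def burstiness(inter_event_times):
    """Goh-Barabasi burstiness B = (sigma - mu) / (sigma + mu) of the gaps
    between consecutive phone use sessions. B lies in [-1, 1]: -1 means
    perfectly periodic use, ~0 Poisson-like timing, and values toward +1
    indicate bursty use (clusters of sessions separated by long silences)."""
    mu = statistics.mean(inter_event_times)
    sigma = statistics.pstdev(inter_event_times)
    return (sigma - mu) / (sigma + mu)

# Perfectly regular sessions every 10 minutes -> B = -1.0
regular = burstiness([10, 10, 10, 10])
# Mostly short gaps with one long silence -> bursty, B > 0
bursty = burstiness([1, 1, 1, 100])
```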
  4.
    Background Mobile health technology has demonstrated the ability of smartphone apps and sensors to collect data pertaining to patient activity, behavior, and cognition. It also offers the opportunity to understand how everyday passive mobile metrics such as battery life and screen time relate to mental health outcomes through continuous sensing. Impulsivity is an underlying factor in numerous physical and mental health problems. However, few studies have been designed to help us understand how mobile sensors and self-report data can improve our understanding of impulsive behavior. Objective The objective of this study was to explore the feasibility of using mobile sensor data to detect and monitor self-reported state impulsivity and impulsive behavior passively via a cross-platform mobile sensing application. Methods We enrolled 26 participants who were part of a larger study of impulsivity to take part in a real-world, continuous mobile sensing study over 21 days on both Apple operating system (iOS) and Android platforms. The mobile sensing system (mPulse) collected data from call logs, battery charging, and screen checking. To validate the model, we used mobile sensing features to predict common self-reported impulsivity traits, objective mobile behavioral and cognitive measures, and ecological momentary assessment (EMA) of state impulsivity and constructs related to impulsive behavior (ie, risk-taking, attention, and affect). Results Overall, the findings suggested that passive measures of mobile phone use such as call logs, battery charging, and screen checking can predict different facets of trait and state impulsivity and impulsive behavior. For impulsivity traits, the models significantly explained variance in sensation seeking, planning, and lack of perseverance traits but failed to explain motor, urgency, lack of premeditation, and attention traits. 
Passive sensing features from call logs, battery charging, and screen checking were particularly useful in explaining and predicting trait-based sensation seeking. On a daily level, the model successfully predicted objective behavioral measures such as present bias in delay discounting tasks, commission and omission errors in a cognitive attention task, and total gains in a risk-taking task. Our models also predicted daily EMA questions on positivity, stress, productivity, healthiness, and emotion and affect. Perhaps most intriguingly, the model failed to predict daily EMA designed to measure previous-day impulsivity using face-valid questions. Conclusions The study demonstrated the potential for developing trait and state impulsivity phenotypes and detecting impulsive behavior from everyday mobile phone sensors. Limitations of the current research and suggestions for building more precise passive sensing models are discussed. Trial Registration ClinicalTrials.gov NCT03006653; https://clinicaltrials.gov/ct2/show/NCT03006653 
  5. In this study, we introduce and validate a computational method to detect lifestyle change that occurs in response to a multi-domain healthy brain aging intervention. To detect behavior change, digital behavior markers are extracted from smartwatch sensor data and a permutation-based change detection algorithm quantifies the change in marker-based behavior from a pre-intervention, 1-week baseline. To validate the method, we verify that changes are successfully detected from synthetic data with known pattern differences. Next, we employ this method to detect overall behavior change for n = 28 brain health intervention subjects and n = 17 age-matched control subjects. For these individuals, we observe a monotonic increase in behavior change from the baseline week with a slope of 0.7460 for the intervention group and a slope of 0.0230 for the control group. Finally, we utilize a random forest algorithm to perform leave-one-subject-out prediction of intervention versus control subjects based on digital marker delta values. The random forest predicts whether the subject is in the intervention or control group with an accuracy of 0.87. This work has implications for capturing objective, continuous data to inform our understanding of intervention adoption and impact. 
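A permutation test of this kind can be sketched as follows, here on a single digital marker comparing a post-intervention week against the 1-week baseline; the paper's actual algorithm quantifies change over full marker vectors, so this single-marker version is a simplification:

```python
import random

def permutation_change(baseline, current, n_perm=2000, seed=1):
    """Two-sample permutation test on the absolute difference of mean
    marker values. Returns (observed_difference, p_value); a small p
    suggests the behavior marker has changed from the baseline week."""
    rng = random.Random(seed)
    obs = abs(sum(current) / len(current) - sum(baseline) / len(baseline))
    pooled = list(baseline) + list(current)
    n = len(baseline)
    hits = 0
    for _ in range(n_perm):
        # Shuffle the pooled daily values and re-split into two pseudo-weeks.
        rng.shuffle(pooled)
        diff = abs(sum(pooled[n:]) / (len(pooled) - n) - sum(pooled[:n]) / n)
        if diff >= obs:
            hits += 1
    # Add-one correction keeps the p-value strictly positive.
    return obs, (hits + 1) / (n_perm + 1)
```

For example, a daily activity marker that jumps from a steady 5 hours at baseline to 7 hours post-intervention yields an observed difference of 2.0 with a very small p-value, whereas identical weeks yield p = 1.0.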