Title: Patient-Independent Schizophrenia Relapse Prediction Using Mobile Sensor Based Daily Behavioral Rhythm Changes.
A schizophrenia relapse has severe consequences for a patient’s health, work, and sometimes even life safety. If an oncoming relapse can be predicted in time, for example by detecting early behavioral changes in patients, then interventions could be provided to prevent the relapse. In this work, we investigated a machine learning based schizophrenia relapse prediction model that uses mobile sensing data to characterize behavioral features. A patient-independent model providing sequential predictions, closely representing the clinical deployment scenario for relapse prediction, was evaluated. The model uses the mobile sensing data from the most recent four weeks to predict an oncoming relapse in the next week. We used behavioral rhythm features extracted from daily templates of mobile sensing data, self-reported symptoms collected via EMA (Ecological Momentary Assessment), and demographics to compare different classifiers for relapse prediction. A Naive Bayes based model gave the best results, with an F2 score of 0.083 when evaluated on a dataset of 63 schizophrenia patients, each monitored for up to a year. The obtained F2 score, though low, is better than the baseline performance of random classification (F2 score of 0.02 ± 0.024). Thus, mobile sensing has predictive value for detecting an oncoming relapse and needs further investigation to improve the current performance. Towards that end, further feature engineering and model personalization based on the behavioral idiosyncrasies of a patient could be helpful.
Award ID(s):
1840167
PAR ID:
10289960
Journal Name:
Wireless Mobile Communication and Healthcare. MobiHealth 2020. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
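As a concrete illustration of the evaluation pipeline described in the abstract, the sketch below assembles a patient-independent (leave-one-patient-out) Naive Bayes classifier over weekly feature vectors and scores it with the F2 metric. The file names, array shapes, and setup are placeholders standing in for whatever pipeline produced the four-week behavioral rhythm features; this is not the authors' code.

```python
# Minimal sketch of a patient-independent, sliding-window relapse prediction
# evaluation, assuming weekly feature vectors have already been extracted from
# mobile sensing data (rhythm features, EMA scores, demographics). The file and
# variable names below are illustrative placeholders, not the paper's code.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import fbeta_score
from sklearn.model_selection import LeaveOneGroupOut

# X: one row per patient-week, built from the preceding four weeks of sensing data
# y: 1 if a relapse occurs in the following week, else 0
# groups: patient identifiers, so no patient appears in both train and test folds
X = np.load("weekly_features.npy")    # shape (n_weeks, n_features) -- assumed file
y = np.load("relapse_labels.npy")     # shape (n_weeks,)            -- assumed file
groups = np.load("patient_ids.npy")   # shape (n_weeks,)            -- assumed file

logo = LeaveOneGroupOut()
y_true, y_pred = [], []
for train_idx, test_idx in logo.split(X, y, groups):
    clf = GaussianNB().fit(X[train_idx], y[train_idx])
    y_true.extend(y[test_idx])
    y_pred.extend(clf.predict(X[test_idx]))

# F2 weights recall more heavily than precision, which suits relapse detection,
# where missing an oncoming relapse is costlier than a false alarm.
print("F2 score:", fbeta_score(y_true, y_pred, beta=2))
```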
More Like this
1. Existing pain assessment methods in the intensive care unit rely on patient self-report or visual observation by nurses. Patient self-report is subjective and can suffer from poor recall. In the case of non-verbal patients, behavioral pain assessment methods provide limited granularity, are subjective, and put additional burden on already overworked staff. Previous studies have shown the feasibility of autonomous pain expression assessment by detecting Facial Action Units (AUs). However, previous approaches for detecting facial pain AUs have historically been limited to controlled environments. In this study, for the first time, we collected and annotated a pain-related AU dataset, Pain-ICU, containing 55,085 images from critically ill adult patients. We evaluated the performance of OpenFace, an open-source facial behavior analysis tool, and the trained AU R-CNN model on our Pain-ICU dataset. Variables such as assisted breathing devices, environmental lighting, and patient orientation with respect to the camera make AU detection harder than in controlled settings. Although OpenFace has shown state-of-the-art results in general-purpose AU detection tasks, it could not accurately detect AUs in our Pain-ICU dataset (F1-score of 0.42). To address this problem, we trained the AU R-CNN model on our Pain-ICU dataset, resulting in a satisfactory average F1-score of 0.77. In this study, we show the feasibility of detecting facial pain AUs in uncontrolled ICU settings.
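Evaluating AU detectors of the kind compared above typically reduces to per-AU F1 scoring of predicted activations against manual annotations. The snippet below is a minimal, self-contained sketch of that scoring step using placeholder arrays and an illustrative list of pain-related AUs; it does not use the Pain-ICU data or the actual detector outputs.

```python
# Minimal sketch of scoring facial Action Unit (AU) detections against manual
# annotations, as one would when comparing OpenFace and AU R-CNN outputs.
# The arrays and AU list below are placeholders, not the Pain-ICU data.
import numpy as np
from sklearn.metrics import f1_score

AU_NAMES = ["AU4", "AU6", "AU7", "AU9", "AU10", "AU43"]  # pain-related AUs (illustrative)

y_true = np.random.randint(0, 2, size=(1000, len(AU_NAMES)))  # per-image annotations
y_pred = np.random.randint(0, 2, size=(1000, len(AU_NAMES)))  # detector outputs

per_au = f1_score(y_true, y_pred, average=None)
for name, score in zip(AU_NAMES, per_au):
    print(f"{name}: F1 = {score:.2f}")
print("average F1 =", f1_score(y_true, y_pred, average="macro"))
```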
2. Background Adoptive CD19-targeted chimeric antigen receptor (CAR) T-cell transfer has become a promising treatment for leukemia. Although patient responses vary across different clinical trials, reliable methods to dissect and predict patient responses to novel therapies are currently lacking. Recently, the depiction of patient responses has been achieved using in silico computational models, but their application to prediction remains limited. Methods We established a computational model of CAR T-cell therapy to recapitulate key cellular mechanisms and dynamics during treatment with responses of continuous remission (CR), non-response (NR), and CD19-positive (CD19+) and CD19-negative (CD19−) relapse. Real-time CAR T-cell and tumor burden data of 209 patients were collected from clinical studies and standardized with unified units in bone marrow. Parameter estimation was conducted using the stochastic approximation expectation maximization algorithm for nonlinear mixed-effects modeling. Results We revealed critical determinants related to patient responses at remission, resistance, and relapse. For CR, NR, and CD19+ relapse, the overall functionality of CAR T-cells led to the various outcomes, whereas loss of the CD19+ antigen and the bystander killing effect of CAR T-cells may partly explain the progression of CD19− relapse. Furthermore, we predicted patient responses by combining the peak and accumulated values of CAR T-cells or by inputting early-stage CAR T-cell dynamics. A clinical trial simulation using virtual patient cohorts generated from real clinical patient datasets was conducted to further validate the prediction. Conclusions Our model dissected the mechanisms behind the distinct responses of leukemia to CAR T-cell therapy. This patient-based computational immuno-oncology model can predict late responses and may be informative in clinical treatment and management.
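The abstract does not spell out the model equations, so the following is only a toy sketch of the kind of mechanistic model involved: two coupled ODEs for tumor burden and CAR T-cell level, integrated with SciPy. The equations and all parameter values are illustrative assumptions, and the nonlinear mixed-effects (SAEM) estimation step is not reproduced.

```python
# Toy sketch of a mechanistic CAR T-cell therapy model: coupled ODEs for tumor
# (CD19+ leukemia) burden and CAR T-cell level. Equations and parameters are
# illustrative only and do not reproduce the paper's actual model.
import numpy as np
from scipy.integrate import solve_ivp

def car_t_model(t, y, r_t, k_kill, a_exp, d_c):
    tumor, car = y
    d_tumor = r_t * tumor - k_kill * car * tumor               # tumor growth minus CAR T killing
    d_car = a_exp * car * tumor / (1.0 + tumor) - d_c * car    # antigen-driven expansion minus decay
    return [d_tumor, d_car]

sol = solve_ivp(car_t_model, t_span=(0, 60), y0=[1e3, 1.0],
                args=(0.1, 1e-3, 0.5, 0.05), dense_output=True)
t = np.linspace(0, 60, 200)
tumor, car = sol.sol(t)
print("tumor burden at day 60:", tumor[-1])
print("peak CAR T-cell level:", car.max())
```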
3. Abstract Schizophrenia is a severe and complex psychiatric disorder with heterogeneous and dynamic multi-dimensional symptoms. Behavioral rhythms, such as sleep rhythm, are usually disrupted in people with schizophrenia. As such, behavioral rhythm sensing with smartphones and machine learning can help better understand and predict their symptoms. Our goal is to predict fine-grained symptom changes with interpretable models. We computed rhythm-based features from 61 participants with 6,132 days of data and used multi-task learning to predict their ecological momentary assessment scores for 10 different symptom items. By taking into account both the similarities and differences between different participants and symptoms, our multi-task learning models perform statistically significantly better than models trained with single-task learning for predicting patients’ individual symptom trajectories, such as feeling depressed, social, or calm, and hearing voices. We also found different subtypes for each of the symptoms by applying unsupervised clustering to the feature weights in the models. Taken together, compared to the features used in previous studies, our rhythm features not only improved the models’ prediction accuracy but also provided better interpretability of how patients’ behavioral rhythms and the rhythms of their environments influence their symptom conditions. This will enable both patients and clinicians to monitor how these factors affect a patient’s condition and how to mitigate their influence. As such, we envision that our solution enables early detection and early intervention before a patient’s condition starts deteriorating, without requiring extra effort from patients and clinicians.
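A minimal way to see the multi-task idea in code is to fit all symptom items jointly with a model that couples them, for example a multi-task lasso that selects a shared set of rhythm features, and compare it against independently fitted single-task models. The sketch below uses synthetic placeholder data and is a stand-in for, not a reproduction of, the paper's models.

```python
# Minimal sketch of multi-task prediction of EMA symptom scores from daily
# rhythm features. MultiTaskLasso couples the 10 symptom items by selecting a
# shared set of features across tasks; the data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import MultiTaskLasso, Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(6132, 32))                               # one row per participant-day of rhythm features
W = rng.normal(size=(32, 10)) * (rng.random((32, 1)) < 0.3)   # sparse structure shared across symptoms
Y = X @ W + rng.normal(scale=0.5, size=(6132, 10))            # 10 EMA symptom scores

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

multi = MultiTaskLasso(alpha=0.1).fit(X_tr, Y_tr)
print("multi-task R^2:", multi.score(X_te, Y_te))

# Single-task baseline: fit each symptom item independently.
single_r2 = np.mean([Lasso(alpha=0.1).fit(X_tr, Y_tr[:, k]).score(X_te, Y_te[:, k])
                     for k in range(10)])
print("single-task mean R^2:", single_r2)
```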
4. Abstract Background Alzheimer’s Disease (AD) is a widespread neurodegenerative disease with Mild Cognitive Impairment (MCI) acting as an interim phase between the normal cognitive state and AD. The irreversible nature of AD and the difficulty of early prediction present significant challenges for patients, caregivers, and the healthcare sector. Deep learning (DL) methods such as Recurrent Neural Networks (RNN) have been utilized to analyze Electronic Health Records (EHR) to model disease progression and predict diagnosis. However, these models do not address some inherent irregularities in EHR data, such as irregular time intervals between clinical visits. Furthermore, most DL models are not interpretable. To address these issues, we developed a novel DL architecture called Time-Aware RNN (TA-RNN) to predict MCI to AD conversion at the next clinical visit. Method TA-RNN comprises a time embedding layer, an attention-based RNN, and a prediction layer based on a multi-layer perceptron (MLP) (Figure 1). For interpretability, a dual-level attention mechanism within the RNN identifies significant visits and features impacting predictions. TA-RNN addresses irregular time intervals by incorporating time embedding into longitudinal cognitive and neuroimaging data based on attention weights to create a patient embedding. The MLP, trained on demographic data and the patient embedding, predicts AD conversion. TA-RNN was evaluated on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and National Alzheimer’s Coordinating Center (NACC) datasets based on F2 score and sensitivity. Result Multiple TA-RNN models were trained with two, three, five, or six visits to predict the diagnosis at the next visit. In one setup, the models were trained and tested on ADNI. In another setup, the models were trained on the entire ADNI dataset and evaluated on the entire NACC dataset. The results indicated superior performance of TA-RNN compared to state-of-the-art (SOTA) and baseline approaches for both setups (Figures 2A and 2B). Based on attention weights, we also highlighted significant visits (Figure 3A) and features (Figure 3B) and observed that the CDRSB and FAQ features and the most recent visit had the highest influence on predictions. Conclusion We propose TA-RNN, an interpretable model to predict MCI to AD conversion while handling irregular time intervals. TA-RNN outperformed SOTA and baseline methods in multiple experiments.
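A rough PyTorch sketch of the ingredients named above (a time embedding for irregular visit gaps, feature- and visit-level attention over an RNN, and an MLP head combining the patient embedding with demographics) is given below. The layer sizes and the exact attention formulation are assumptions for illustration, not the published TA-RNN architecture.

```python
# Rough sketch of the TA-RNN ingredients: time embedding of irregular inter-visit
# gaps, dual-level (feature and visit) attention over a GRU, and an MLP head that
# combines the patient embedding with demographics. Dimensions and the attention
# scheme are assumptions, not the paper's code.
import torch
import torch.nn as nn

class TimeAwareRNN(nn.Module):
    def __init__(self, n_features, n_demo, d_hidden=64):
        super().__init__()
        self.time_embed = nn.Linear(1, n_features)              # embed the elapsed gap per visit
        self.feature_attn = nn.Linear(n_features, n_features)   # feature-level attention weights
        self.rnn = nn.GRU(n_features, d_hidden, batch_first=True)
        self.visit_attn = nn.Linear(d_hidden, 1)                # visit-level attention weights
        self.head = nn.Sequential(nn.Linear(d_hidden + n_demo, 32), nn.ReLU(),
                                  nn.Linear(32, 1))             # probability of MCI -> AD conversion

    def forward(self, visits, gaps, demo):
        # visits: (batch, n_visits, n_features); gaps: (batch, n_visits, 1); demo: (batch, n_demo)
        x = visits * torch.softmax(self.feature_attn(visits), dim=-1)  # weight features within each visit
        x = x + self.time_embed(gaps)                                  # inject elapsed-time information
        h, _ = self.rnn(x)                                             # (batch, n_visits, d_hidden)
        a = torch.softmax(self.visit_attn(h), dim=1)                   # attention over visits
        patient = (a * h).sum(dim=1)                                   # attention-pooled patient embedding
        return torch.sigmoid(self.head(torch.cat([patient, demo], dim=-1)))

model = TimeAwareRNN(n_features=8, n_demo=3)
p = model(torch.randn(4, 5, 8), torch.rand(4, 5, 1), torch.randn(4, 3))
print(p.shape)  # torch.Size([4, 1])
```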
5. Background Wearable technology, such as smartwatches, can capture valuable patient-generated data and help inform patient care. Electronic health records provide logical and practical platforms for including such data, but it is necessary to evaluate the way the data are presented and visualized. Objective The aim of this study is to evaluate a graphical interface that displays patients’ health data from smartwatches, mimicking the integration within the environment of electronic health records. Methods A total of 12 health care professionals evaluated a simulated interface using a usability scale questionnaire, testing the clarity of the interface, colors, usefulness of information, navigation, and readability of text. Results The interface was positively received, with 14 out of the 16 questions generating a score of 5 or greater among at least 75% of participants (9/12). On an 8-point Likert scale, the highest rated features of the interface were quick turnaround times (mean score 7.1), readability of the text (mean score 6.8), and use of terminology/abbreviations (mean score 6.75). Conclusions Collaborating with health care professionals to develop and refine a graphical interface for visualizing patients’ health data from smartwatches revealed that the key elements of the interface were acceptable. The implementation of such data from smartwatches and other mobile devices within electronic health records should consider the opinions of key stakeholders as the development of this platform progresses.