
Title: Using Smartphone Sensor Data to Assess Inhibitory Control in the Wild: Longitudinal Study
Background: Inhibitory control, or inhibition, is one of the core executive functions in humans. It contributes to our attention, performance, and physical and mental well-being. Our inhibitory control is modulated by various factors and therefore fluctuates over time. Being able to continuously and unobtrusively assess our inhibitory control and understand the mediating factors may allow us to design intelligent systems that help manage our inhibitory control and, ultimately, our well-being.

Objective: The aim of this study is to investigate whether we can assess individuals' inhibitory control using an unobtrusive and scalable approach and to identify digital markers that are predictive of changes in inhibitory control.

Methods: We developed InhibiSense, an app that passively collects the following information: users' behaviors based on their phone use and sensor data, the ground truth of their inhibitory control measured with stop-signal tasks (SSTs) and ecological momentary assessments (EMAs), and heart rate information transmitted from a wearable heart rate monitor (Polar H10). We conducted a 4-week in-the-wild study in which participants were asked to install InhibiSense on their phones and wear a Polar H10. We used generalized estimating equation (GEE) and gradient boosting tree models, fitted with features extracted from participants' phone use and sensor data, to predict their stop-signal reaction time (SSRT), an objective metric used to measure an individual's inhibitory control, and to identify the predictive digital markers.

Results: A total of 12 participants completed the study, and 2189 EMA and SST responses were collected. The results from the GEE models suggest that the top digital markers positively associated with an individual's SSRT include phone use burstiness (P=.005), the mean duration between 2 consecutive phone use sessions (P=.02), the change rate of battery level when the phone was not charging (P=.04), and the frequency of incoming calls (P=.03). The top digital markers negatively associated with SSRT include the standard deviation of acceleration (P<.001), the frequency of short phone use sessions (P<.001), the mean duration of incoming calls (P<.001), the mean decibel level of ambient noise (P=.007), and the percentage of time the phone was connected to the internet through a mobile network (P=.001). No significant correlation was found between the participants' objective and subjective measures of inhibitory control.

Conclusions: We identified phone-based digital markers that were predictive of changes in inhibitory control and showed how each was positively or negatively associated with a person's inhibitory control. The results corroborate the findings of previous studies, suggesting that inhibitory control can be assessed continuously and unobtrusively in the wild. We discuss potential applications of the system and how technological interventions can be designed to help manage inhibitory control.
Award ID(s): 1840025
Publication Date:
NSF-PAR ID: 10339863
Journal Name: JMIR mHealth and uHealth
Volume: 8
Issue: 12
Page Range or eLocation-ID: e21703
ISSN: 2291-5222
Sponsoring Org: National Science Foundation
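As a concrete illustration of the Methods above, here is a minimal Python sketch of the burstiness feature and GEE analysis. The input file, column names, and feature subset are hypothetical, and the burstiness definition shown (Goh and Barabási's (σ−μ)/(σ+μ) over inter-session gaps) is one common choice rather than the paper's confirmed formula.

```python
# Minimal sketch (not the authors' code): burstiness feature + GEE.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def burstiness(gaps: np.ndarray) -> float:
    """Goh-Barabasi burstiness B = (sigma - mu) / (sigma + mu) over the
    gaps (seconds) between consecutive phone-use sessions. B approaches
    1 for bursty usage, 0 for Poisson-like, -1 for perfectly regular."""
    mu, sigma = gaps.mean(), gaps.std()
    return (sigma - mu) / (sigma + mu)

# Hypothetical long-format table: one row per EMA/SST response, with
# phone-derived features aggregated over the preceding window.
df = pd.read_csv("features_and_ssrt.csv")

# An exchangeable working correlation accounts for the repeated
# observations contributed by each participant.
model = smf.gee(
    "ssrt ~ phone_burstiness + mean_session_gap + short_session_freq",
    groups="participant",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
print(model.fit().summary())  # coefficient signs and P values per marker
```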
More Like this
1. Background: With nearly 20% of the US adult population using fitness trackers, there is an increasing focus on how physiological data from these devices can provide actionable insights about workplace performance. However, in-the-wild studies examining how these metrics correlate with cognitive performance measures across a diverse population are lacking, and claims made by device manufacturers are vague. While extensive research has produced a variety of theories on how physiological measures affect cognitive performance, virtually all such studies were conducted in highly controlled settings, and their validity in the real world is poorly understood.

Objective: We seek to bridge this gap by evaluating prevailing theories on the effects of a variety of sleep, activity, and heart rate parameters on cognitive performance against data collected in real-world settings.

Methods: We used a Fitbit Charge 3 and a smartphone app to collect physiological and neurobehavioral task data, respectively, as part of our 6-week in-the-wild study. We collected data from 24 participants across multiple population groups (shift workers, regular workers, and graduate students) on different performance measures (vigilant attention and cognitive throughput). Simultaneously, we used the fitness tracker to unobtrusively obtain physiological measures that could influence these performance measures, including over 900 nights of sleep and over 1 million minutes of heart rate and physical activity metrics. We performed a repeated measures correlation (r_rm) analysis to investigate which sleep and physiological markers are associated with each performance measure. We also report how our findings relate to existing theories and previous observations from controlled studies.

Results: Daytime alertness was significantly correlated with total sleep duration on the previous night (r_rm=0.17, P<.001) as well as with the duration of rapid eye movement sleep (r_rm=0.12, P<.001) and light sleep (r_rm=0.15, P<.001). Cognitive throughput, by contrast, was not significantly correlated with sleep duration but with sleep timing: a circadian phase shift toward a later sleep time corresponded with lower cognitive throughput on the following day (r_rm=–0.13, P<.001). Both measures show circadian variations, but only alertness showed a decline (r_rm=–0.1, P<.001) as a result of homeostatic pressure. Both heart rate and physical activity correlated positively with alertness as well as cognitive throughput.

Conclusions: Our findings reveal significant differences in which sleep-related physiological metrics influence each of the 2 performance measures. This makes the case for more targeted in-the-wild studies investigating how physiological measures from self-tracking data influence, or can be used to predict, specific aspects of cognitive performance.
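A repeated measures correlation of the kind reported above can be computed with the open-source pingouin package, which implements the Bakdash and Marusich (2017) rmcorr method. The sketch below uses a hypothetical input file and column names, not the authors' pipeline.

```python
# Minimal sketch: repeated measures correlation (r_rm) between nightly
# sleep duration and next-day alertness.
import pandas as pd
import pingouin as pg

# Hypothetical long-format table: one row per participant-day, with a
# participant id, total sleep minutes, and a vigilant-attention score.
df = pd.read_csv("sleep_vs_alertness.csv")

stats = pg.rm_corr(data=df, x="total_sleep_minutes",
                   y="alertness_score", subject="participant")
print(stats)  # reports r_rm, degrees of freedom, P value, and 95% CI
```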
2. Background: The physical and emotional well-being of women is critical for healthy pregnancy and birth outcomes. The Two Happy Hearts intervention is a personalized mind-body program coached by community health workers that includes monitoring and reflecting on personal health, as well as practicing stress management strategies such as mindful breathing and movement.

Objective: The aims of this study are to (1) test the daily use of a wearable device to objectively measure physical and emotional well-being, along with subjective assessments, during pregnancy, and (2) explore the user's engagement with the Two Happy Hearts intervention prototype and understand her experiences with the various intervention components.

Methods: A case study with a mixed design was used. We recruited a 29-year-old woman at 33 weeks of gestation with a singleton pregnancy. She had no medical complications or physical restrictions, and she was enrolled in the Medi-Cal public health insurance plan. The participant engaged with the Two Happy Hearts intervention prototype from her third trimester until delivery. The Oura smart ring was used to continuously monitor objective physical and emotional states, such as resting heart rate, resting heart rate variability, sleep, and physical activity. In addition, the participant self-reported her physical and emotional health using the Two Happy Hearts mobile app-based 24-hour recall surveys (sleep quality and level of physical activity) and ecological momentary assessments (positive and negative emotions), as well as the Perceived Stress Scale, Center for Epidemiologic Studies Depression Scale, and State-Trait Anxiety Inventory. Engagement with the intervention was recorded via both the smart ring and the phone app, and user experiences were collected via Research Electronic Data Capture satisfaction surveys. Objective data from the Oura ring and subjective data on physical and emotional health were described. Regression plots and Pearson correlations between the objective and subjective data were presented, and content analysis was performed on the qualitative data.

Results: Decreased resting heart rate was significantly correlated with increased heart rate variability (r=–0.92, P<.001). We found significant associations between self-reported responses and Oura ring measures: (1) positive emotions and heart rate variability (r=0.54, P<.001), (2) sleep quality and sleep score (r=0.52, P<.001), and (3) physical activity and step count (r=0.77, P<.001). In addition, deep sleep appeared to increase as light and rapid eye movement sleep decreased. The psychological measures of stress, depression, and anxiety appeared to decrease from baseline to postintervention. Furthermore, the participant had a high completion rate for the components of the intervention prototype and shared several positive experiences, such as increased self-efficacy and a normal delivery.

Conclusions: The Two Happy Hearts intervention prototype shows promise for potential use by underserved pregnant women.
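The Pearson correlations between self-reports and ring measures described above can be reproduced in outline as follows; the input file and column names are illustrative assumptions.

```python
# Minimal sketch: Pearson correlations between daily self-reports and
# Oura ring measures.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("daily_measures.csv")  # hypothetical: one row per day
for subjective, objective in [("positive_emotion", "hrv"),
                              ("sleep_quality", "sleep_score"),
                              ("activity_level", "step_count")]:
    r, p = pearsonr(df[subjective], df[objective])
    print(f"{subjective} vs {objective}: r={r:.2f}, P={p:.3g}")
```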
3. The stop-signal task (SST) is the gold standard experimental model of inhibitory control. However, neither SST condition contrast (stop vs go, successful vs failed stop) purely operationalizes inhibition. Because stop trials include a second, infrequent signal, the stop versus go contrast confounds inhibition with attentional and stimulus processing demands. While this confound is controlled for in the successful versus failed stop contrast, the go process is systematically faster on failed stop trials, contaminating that contrast with a different noninhibitory confound. Here, we present an SST variant that addresses both confounds, and we evaluate putative neural indices of inhibition with these influences removed. In our variant, stop signals occurred on every trial, equating the noninhibitory demands of the stop versus go contrast. To entice participants to respond despite the impending stop signals, responses produced before stop signals were rewarded. This also reversed the go process bias that typically affects the successful versus failed stop contrast. We recorded scalp electroencephalography in this new version of the task (as well as in a standard version of the SST with infrequent stop signals) and found that, even under these conditions, the properties of the frontocentral stop-signal P3 ERP remained consistent with the race model. Specifically, in both tasks, the amplitude of the P3 was increased on stop versus go trials. Moreover, the onset of this P3 occurred earlier for successful compared with failed stop trials in both tasks, consistent with the race model's proposal that an earlier start of the inhibition process increases stopping success. Therefore, the frontocentral stop-signal P3 represents a neural process whose properties are in line with the predictions of the race model of motor inhibition, even when the SST's confounds are controlled.
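The race model underlying this analysis also yields the standard SSRT estimator used across stop-signal studies (including the InhibiSense work above). Below is a minimal sketch of the common integration method with toy data; it is one textbook estimator, not the authors' code, and omits refinements such as go-omission replacement.

```python
# Minimal sketch: SSRT via the race model's integration method.
import numpy as np

def ssrt_integration(go_rts: np.ndarray, p_respond: float,
                     mean_ssd: float) -> float:
    """Race model: the stop process finishes at the p_respond quantile
    of the go-RT distribution; SSRT is that quantile minus the mean
    stop-signal delay (SSD)."""
    go_rts = np.sort(go_rts)
    # nth-fastest go RT, where n = p(respond | stop signal) * n_go_trials
    n = int(np.ceil(p_respond * len(go_rts))) - 1
    return go_rts[n] - mean_ssd

go_rts = np.array([412, 450, 468, 505, 532, 540, 575, 610, 642, 690])  # ms, toy
print(ssrt_integration(go_rts, p_respond=0.5, mean_ssd=220))  # 312.0 ms here
```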
  4. Obeid, I. (Ed.)
The Neural Engineering Data Consortium (NEDC) is developing the Temple University Digital Pathology Corpus (TUDP), an open source database of high-resolution images from scanned pathology samples [1], as part of its National Science Foundation-funded Major Research Instrumentation grant titled "MRI: High Performance Digital Pathology Using Big Data and Machine Learning" [2]. The long-term goal of this project is to release one million images. We have currently scanned over 100,000 images and are in the process of annotating breast tissue data for our first official corpus release, v1.0.0. This release contains 3,505 annotated images of breast tissue, including 74 patients with cancerous diagnoses (out of a total of 296 patients). In this poster, we will present an analysis of this corpus and discuss the challenges we have faced in efficiently producing high-quality annotations of breast tissue.

It is well known that state-of-the-art algorithms in machine learning require vast amounts of data. Fields such as speech recognition [3], image recognition [4], and text processing [5] are able to deliver impressive performance with complex deep learning models because they have developed large corpora to support training of extremely high-dimensional models (e.g., billions of parameters). Other fields that do not have access to such data resources must rely on techniques in which existing models can be adapted to new datasets [6].

A preliminary version of this breast corpus release was tested in a pilot study using a baseline machine learning system, ResNet18 [7], that leverages several open-source Python tools. The pilot corpus was divided into three sets: train, development, and evaluation. Portions of these slides were manually annotated [1] using the nine labels in Table 1 [8] to identify five to ten examples of pathological features on each slide. Not every pathological feature is annotated, meaning excluded areas can include focuses particular to these labels that are not used for training. A summary of the number of patches within each label is given in Table 2. To maintain a balanced training set, 1,000 patches of each label were used to train the machine learning model. Throughout all sets, only annotated patches were involved in model development. The performance of this model in identifying all the patches in the evaluation set can be seen in the confusion matrix of classification accuracy in Table 3. The highest-performing labels were background (97% correct identification) and artifact (76% correct identification). A correlation exists between labels with more than 6,000 development patches and accurate performance on the evaluation set. Additionally, these results indicated a need to further refine the annotation of invasive ductal carcinoma ("indc"), inflammation ("infl"), nonneoplastic features ("nneo"), normal ("norm"), and suspicious ("susp"). This pilot experiment motivated changes to the corpus that will be discussed in detail in this poster presentation.

To increase the accuracy of the machine learning model, we modified how we addressed underperforming labels. One common source of error arose from how non-background labels were converted into patches: large areas of background within other labels were isolated within a patch, resulting in connective tissue misrepresenting a non-background label. In response, the annotation overlay margins were revised to exclude benign connective tissue in non-background labels.

Corresponding patient reports and supporting immunohistochemical (IHC) stains further guided annotation reviews. The microscopic diagnoses given by the primary pathologist in these reports detail the pathological findings within each tissue site, but not within each specific slide. The microscopic diagnoses informed revisions specifically targeting annotated regions classified as cancerous, ensuring that the labels "indc" and "dcis" were used only where a micropathologist had diagnosed them as such. Further differentiation of cancerous and precancerous labels, as well as the location of their focus on a slide, could be accomplished with supplemental IHC-stained slides. When distinguishing whether a focus is a nonneoplastic feature or a cancerous growth, pathologists apply antigen-targeting stains to the tissue in question to confirm the diagnosis. For example, a nonneoplastic feature of usual ductal hyperplasia will display diffuse staining for cytokeratin 5 (CK5) and no diffuse staining for estrogen receptor (ER), while a cancerous growth of ductal carcinoma in situ will have negative or focally positive staining for CK5 and diffuse staining for ER [9]. Many tissue samples contain cancerous and non-cancerous features with morphological overlaps that cause variability between annotators. The informative fields that IHC slides provide could play an integral role in machine-model pathology diagnostics.

Following the revisions made to all the annotations, a second experiment was run using ResNet18. Compared to the pilot study, an increase in model prediction accuracy was seen for the labels indc, infl, nneo, norm, and null. This increase is correlated with an increase in annotated area and annotation accuracy. Model performance in identifying the suspicious label decreased by 25% because of a 57% decrease in the total annotated area described by this label. A summary of the model performance is given in Table 4, which shows the new prediction accuracy and the absolute change in error rate compared to Table 3.

The breast tissue subset we are developing includes 3,505 annotated breast pathology slides from 296 patients. The average size of a scanned SVS file is 363 MB. The annotations are stored in an XML format. A CSV version of the annotation file is also available, providing a flat, or simple, annotation that is easy for machine learning researchers to access and interface to their systems. Each patient is identified by an anonymized medical reference number. Within each patient's directory, one or more sessions are identified, also anonymized to the first of the month in which the sample was taken. These sessions are broken into groupings of tissue taken on that date (in this case, breast tissue). A deidentified patient report stored as a flat text file is also available. Within these slides there are a total of 16,971 annotated regions, with an average of 4.84 annotations per slide. Among those annotations, 8,035 are non-cancerous (normal, background, null, and artifact), 6,222 are carcinogenic signs (inflammation, nonneoplastic, and suspicious), and 2,714 are cancerous labels (ductal carcinoma in situ and invasive ductal carcinoma). The individual patients are split into three sets: train, development, and evaluation. Of the 74 cancerous patients, 20 were allotted to each of the development and evaluation sets, while the remaining 34 were allotted to the training set.

The remaining 222 patients were split up to preserve the overall distribution of labels within the corpus. This was done in the hope of creating control sets for comparable studies. Overall, the development and evaluation sets each have 80 patients, while the training set has 136 patients. In a related component of this project, slides from the Fox Chase Cancer Center (FCCC) Biosample Repository (https://www.foxchase.org/research/facilities/genetic-research-facilities/biosample-repository-facility) are being digitized in addition to slides provided by Temple University Hospital. These data include 18 different tissue types, including approximately 38.5% urinary tissue and 16.5% gynecological tissue. These slides and the metadata provided with them are already anonymized and include diagnoses in a spreadsheet with sample and patient IDs. We plan to release over 13,000 unannotated slides from the FCCC corpus simultaneously with v1.0.0 of TUDP. Details of this release will also be discussed in this poster.

Few digitally annotated databases of pathology samples like TUDP exist because of the extensive data collection and processing required. The breast corpus subset should be released by November 2021, and by December 2021 we should also release the unannotated FCCC data. We are currently annotating urinary tract data as well; we expect to release about 5,600 processed TUH slides in this subset, and we have an additional 53,000 unprocessed TUH slides digitized. Corpora of this size will stimulate the development of a new generation of deep learning technology. In clinical settings where resources are limited, an assistive diagnosis model could support pathologists' workloads and even help prioritize suspected cancerous cases.

ACKNOWLEDGMENTS
This material is supported by the National Science Foundation under grant nos. CNS-1726188 and 1925494. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

REFERENCES
[1] N. Shawki et al., "The Temple University Digital Pathology Corpus," in Signal Processing in Medicine and Biology: Emerging Trends in Research and Applications, 1st ed., I. Obeid, I. Selesnick, and J. Picone, Eds. New York City, New York, USA: Springer, 2020, pp. 67–104. https://www.springer.com/gp/book/9783030368432.
[2] J. Picone, T. Farkas, I. Obeid, and Y. Persidsky, "MRI: High Performance Digital Pathology Using Big Data and Machine Learning." Major Research Instrumentation (MRI), Division of Computer and Network Systems, Award No. 1726188, January 1, 2018 – December 31, 2021. https://www.isip.piconepress.com/projects/nsf_dpath/.
[3] A. Gulati et al., "Conformer: Convolution-augmented Transformer for Speech Recognition," in Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2020, pp. 5036–5040. https://doi.org/10.21437/interspeech.2020-3015.
[4] C.-J. Wu et al., "Machine Learning at Facebook: Understanding Inference at the Edge," in Proceedings of the IEEE International Symposium on High Performance Computer Architecture (HPCA), 2019, pp. 331–344. https://ieeexplore.ieee.org/document/8675201.
[5] I. Caswell and B. Liang, "Recent Advances in Google Translate," Google AI Blog: The latest from Google Research, 2020. [Online]. Available: https://ai.googleblog.com/2020/06/recent-advances-in-google-translate.html. [Accessed: 01-Aug-2021].
[6] V. Khalkhali, N. Shawki, V. Shah, M. Golmohammadi, I. Obeid, and J. Picone, "Low Latency Real-Time Seizure Detection Using Transfer Deep Learning," in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2021, pp. 1–7. https://www.isip.piconepress.com/publications/conference_proceedings/2021/ieee_spmb/eeg_transfer_learning/.
[7] J. Picone, T. Farkas, I. Obeid, and Y. Persidsky, "MRI: High Performance Digital Pathology Using Big Data and Machine Learning," Philadelphia, Pennsylvania, USA, 2020. https://www.isip.piconepress.com/publications/reports/2020/nsf/mri_dpath/.
[8] I. Hunt, S. Husain, J. Simons, I. Obeid, and J. Picone, "Recent Advances in the Temple University Digital Pathology Corpus," in Proceedings of the IEEE Signal Processing in Medicine and Biology Symposium (SPMB), 2019, pp. 1–4. https://ieeexplore.ieee.org/document/9037859.
[9] A. P. Martinez, C. Cohen, K. Z. Hanley, and X. (Bill) Li, "Estrogen Receptor and Cytokeratin 5 Are Reliable Markers to Separate Usual Ductal Hyperplasia From Atypical Ductal Hyperplasia and Low-Grade Ductal Carcinoma In Situ," Arch. Pathol. Lab. Med., vol. 140, no. 7, pp. 686–689, Apr. 2016. https://doi.org/10.5858/arpa.2015-0238-OA.
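For readers who want a starting point, the following is a minimal PyTorch sketch of a ResNet18 baseline like the one described above. The directory layout, label ordering, and hyperparameters are illustrative assumptions, not the NEDC pipeline.

```python
# Minimal sketch: ResNet18 baseline for nine-label patch classification.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_LABELS = 9  # e.g., background, artifact, norm, null, infl, nneo, susp, indc, dcis

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Hypothetical directory of patches arranged as patches/train/<label>/*.png
train_set = datasets.ImageFolder("patches/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_LABELS)  # replace ImageNet head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # short illustrative schedule
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Evaluation against the held-out development and evaluation sets would then produce a confusion matrix analogous to Tables 3 and 4.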
5. Green walls have been used in built environments as a natural element that brings various benefits, improving human health and well-being. In conventional virtual environments, however, a visual connection with a green wall is the only way this natural element can benefit humans, and the impact of such a visual connection on human thermal perception is still not well understood. We therefore conducted an experimental study with 40 participants, comparing the thermal state of two virtual sessions: biophilic (a room with a green wall) and non-biophilic (the same room without a green wall). Both sessions were conducted in a climate chamber under a slightly warm condition (28.89 °C and 50% relative humidity). Participants' thermal state, skin temperature, and heart rate data were collected. According to the results, participants' thermal comfort and hand skin temperature differed significantly between the two sessions, and their mean skin temperature increased significantly over time. The study suggests that until the impact of visual stimuli (e.g., green walls) on thermal perception is fully understood, researchers may need to control visual and thermal stimuli separately when using them in immersive virtual environments. Furthermore, virtual exposure time should be an important consideration when designing experimental procedures.
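In outline, the within-subject comparison between the two sessions could be run as a paired test; this sketch assumes a hypothetical per-participant table and is not the authors' analysis code (the paper does not specify its statistical software).

```python
# Minimal sketch: paired comparison of thermal-comfort votes between
# the biophilic and non-biophilic sessions.
import pandas as pd
from scipy.stats import ttest_rel

df = pd.read_csv("thermal_votes.csv")  # hypothetical: one row per participant
t, p = ttest_rel(df["comfort_biophilic"], df["comfort_nonbiophilic"])
print(f"paired t test: t={t:.2f}, P={p:.3g}")
```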