Long-term and high-dose prescription opioid use places individuals at risk for opioid misuse, opioid use disorder (OUD), and overdose. Existing methods for monitoring opioid use and detecting misuse rely on self-reports, which are prone to reporting bias, and toxicology testing, which may be infeasible in outpatient settings. Although wearable technologies for monitoring day-to-day health metrics have gained significant traction in recent years due to their ease of use, flexibility, and advancements in sensor technology, their application within the opioid use space remains underexplored. In the current work, we demonstrate that oral opioid administrations can be detected using physiological signals collected from a wrist sensor. More importantly, we show that models informed by opioid pharmacokinetics increase reliability in predicting the timing of opioid administrations. Forty-two individuals who were prescribed opioids as a part of their medical treatment in-hospital and after discharge were enrolled. Participants wore a wrist sensor throughout the study, while opioid administrations were tracked using electronic medical records and self-reports. We collected 1,983 hours of sensor data containing 187 opioid administrations from the inpatient setting and 927 hours of sensor data containing 40 opioid administrations from the outpatient setting. We demonstrate that a self-supervised pre-trained model, capable of learning the canonical time series of plasma concentration of the drug derived from opioid pharmacokinetics, can reliably detect opioid administration in both settings. Our work suggests the potential of pharmacokinetic-informed, data-driven models to objectively detect opioid use in daily life. 
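As a rough illustration of the canonical plasma-concentration curve that such a pharmacokinetic-informed model can exploit, the sketch below implements a one-compartment oral-absorption (Bateman) model in Python. The function name and every parameter value are illustrative assumptions, not estimates from the study.

```python
import numpy as np

def plasma_concentration(t_hours, dose_mg=10.0, ka=1.2, ke=0.3, vd_liters=250.0, f_oral=0.7):
    """One-compartment oral-absorption (Bateman) curve.

    All parameter values are illustrative placeholders, not study estimates:
    ka = absorption rate constant (1/h), ke = elimination rate constant (1/h),
    vd_liters = apparent volume of distribution, f_oral = oral bioavailability.
    """
    t = np.asarray(t_hours, dtype=float)
    c = (f_oral * dose_mg * ka) / (vd_liters * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
    return np.clip(c, 0.0, None)  # concentration is never negative

# A normalized template curve sampled once per minute over 8 hours; only the shape
# matters if it is used as a soft target for time since administration.
grid = np.arange(0, 8, 1 / 60)
template = plasma_concentration(grid)
template = template / template.max()
```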
                    
                            
                            Impact of individual and treatment characteristics on wearable sensor-based digital biomarkers of opioid use
                        
                    
    
Abstract: Opioid use disorder is one of the most pressing public health problems of our time. Mobile health tools, including wearable sensors, have great potential in this space but have been underutilized. Of specific interest are digital biomarkers, or end-user-generated physiologic or behavioral measurements that correlate with health or pathology. The current manuscript describes a longitudinal, observational study of adult patients receiving opioid analgesics for acute painful conditions. Participants in the study are monitored with a wrist-worn E4 sensor, during which time physiologic parameters (heart rate/variability, electrodermal activity, skin temperature, and accelerometry) are collected continuously. Opioid use events are recorded via electronic medical record and self-report. Three hundred thirty-nine discrete opioid dose events from 36 participants are analyzed among 2070 hours of sensor data. Fifty-one features are extracted from the data, initially compared pre- and post-opioid administration, and subsequently used to generate machine learning models. Model performance is compared based on individual and treatment characteristics. The best-performing machine learning model for detecting opioid administration is a Channel-Temporal Attention-Temporal Convolutional Network (CTA-TCN) model using raw data from the wearable sensor. History of intravenous drug use is associated with better model performance, while middle age and co-administration of non-narcotic analgesia or sedative drugs are associated with worse model performance. These characteristics may be candidate input features for future opioid detection model iterations. Once mature, this technology could provide clinicians with actionable data on opioid use patterns in real-world settings, and predictive analytics for early identification of opioid use disorder risk.
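To make the pre-/post-administration comparison concrete, here is a minimal Python sketch of windowed summary features around a recorded dose. The 30-minute window, the feature set, and the variable names are assumptions for illustration; they do not reproduce the 51 features used in the study.

```python
import numpy as np
import pandas as pd

def window_features(signal: pd.Series, start: pd.Timestamp, end: pd.Timestamp) -> dict:
    """Simple summary features over one window of a single physiologic channel."""
    seg = signal.loc[start:end].astype(float)
    if len(seg) < 2:
        return {"mean": np.nan, "std": np.nan, "min": np.nan, "max": np.nan, "slope": np.nan}
    slope = np.polyfit(np.arange(len(seg)), seg.to_numpy(), 1)[0]
    return {"mean": seg.mean(), "std": seg.std(), "min": seg.min(), "max": seg.max(), "slope": slope}

def pre_post_features(signal: pd.Series, dose_time: pd.Timestamp, window="30min") -> dict:
    """Features before vs. after a recorded opioid dose, for a paired comparison."""
    w = pd.Timedelta(window)
    pre = window_features(signal, dose_time - w, dose_time)
    post = window_features(signal, dose_time, dose_time + w)
    row = {f"pre_{k}": v for k, v in pre.items()}
    row.update({f"post_{k}": v for k, v in post.items()})
    return row

# Usage (hypothetical data): heart_rate is a pd.Series indexed by timestamp and
# dose_times is a list of pd.Timestamp values taken from the medical record.
# features = pd.DataFrame([pre_post_features(heart_rate, t) for t in dose_times])
```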
- Award ID(s): 2124282
- PAR ID: 10370298
- Publisher / Repository: Nature Publishing Group
- Date Published:
- Journal Name: npj Digital Medicine
- Volume: 5
- Issue: 1
- ISSN: 2398-6352
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Opioid use disorder is a medical condition with major social and economic consequences. While ubiquitous physiological sensing technologies have been widely adopted and extensively used to monitor day-to-day activities and deliver targeted interventions to improve human health, the use of these technologies to detect drug use in natural environments has been largely underexplored. The long-term goal of our work is to develop a mobile technology system that can identify high-risk opioid-related events (i.e., development of tolerance in the setting of prescription opioid use, return-to-use events in the setting of opioid use disorder) and deploy just-in-time interventions to mitigate the risk of overdose morbidity and mortality. In the current paper, we take an initial step by asking a crucial question: Can opioid use be detected using physiological signals obtained from a wrist-mounted sensor? Thirty-six individuals who were admitted to the hospital for an acute painful condition and received opioid analgesics as part of their clinical care were enrolled. Subjects wore a noninvasive wrist sensor during this time (1-14 days) that continuously measured physiological signals (heart rate, skin temperature, accelerometry, electrodermal activity, and interbeat interval). We collected a total of 2070 hours (≈ 86 days) of physiological data and observed a total of 339 opioid administrations. Our results are encouraging and show that, using a Channel-Temporal Attention TCN (CTA-TCN) model, we can detect an opioid administration in a time window with an F1-score of 0.80, a specificity of 0.77, a sensitivity of 0.80, and an AUC of 0.77. We also predict the exact moment of administration in this time window with a normalized mean absolute error of 8.6% and an R² coefficient of 0.85.
- Aims: To develop machine-learning algorithms for predicting the risk of a hospitalization or emergency department (ED) visit for opioid use disorder (OUD) (i.e., OUD acute events) in Pennsylvania Medicaid enrollees in the Opioid Use Disorder Centers of Excellence (COE) program, and to evaluate the fairness of model performance across racial groups. Methods: We studied 20,983 United States Medicaid enrollees aged 18 years or older who had COE visits between April 2019 and March 2021. We applied multivariate logistic regression, least absolute shrinkage and selection operator (LASSO) models, random forests, and eXtreme Gradient Boosting (XGB) to predict OUD acute events following the initial COE visit. Our models included predictors at the system, patient, and regional levels. We assessed model performance using multiple metrics by racial group. Individuals were divided into low-, medium-, and high-risk groups based on predicted risk scores. Results: The training (n = 13,990) and testing (n = 6,993) samples displayed similar characteristics (mean age 38.1 ± 9.3 years, 58% male, 80% White enrollees), with 4% experiencing OUD acute events at baseline. XGB demonstrated the best prediction performance (C-statistic = 76.6% [95% confidence interval = 75.6%–77.7%] vs. 72.8%–74.7% for the other methods). At the balanced cutoff, XGB achieved a sensitivity of 68.2%, a specificity of 70.0%, and a positive predictive value of 8.3%. The XGB model classified the testing sample into high-risk (6%), medium-risk (30%), and low-risk (63%) groups. In the high-risk group, 40.7% had OUD acute events vs. 16.5% and 5.0% in the medium- and low-risk groups. The high- and medium-risk groups captured 44% and 26% of individuals with OUD events, respectively. The XGB model exhibited lower false-negative rates and higher false-positive rates in racial/ethnic minority groups than in White enrollees. Conclusions: New machine-learning algorithms perform well in predicting the risk of OUD acute care use among United States Medicaid enrollees and improve fairness of prediction across racial and ethnic groups compared with previous OUD-related models. (A minimal gradient-boosting risk-stratification sketch appears after this list.)
- In this study, we introduce BedDot, the first contact-free, bed-mounted sensor for continuous blood pressure monitoring. Equipped with a seismic sensor, BedDot eliminates the need for external wearable devices and physical contact, while avoiding the privacy or radiation concerns associated with other technologies such as cameras or radars. Using advanced preprocessing techniques and innovative AI algorithms, we extract time-series features from the collected bed seismogram signals and accurately estimate blood pressure with remarkable stability and robustness. Our user-friendly prototype has been tested with over 75 participants, demonstrating exceptional performance that meets all three major industry standards, including the Association for the Advancement of Medical Instrumentation (AAMI) and Food and Drug Administration (FDA) criteria, and outperforms current state-of-the-art deep learning models for time series analysis. As a non-invasive solution for monitoring blood pressure during sleep and assessing cardiovascular health, BedDot holds immense potential for revolutionizing the field.
- Estimating human joint moments using wearable sensors has utility for personalized health monitoring and generalized exoskeleton control. Data-driven models have potential to map wearable sensor data to human joint moments, even with a reduced sensor suite and without subject-specific calibration. In this study, we quantified the RMSE and R² of a temporal convolutional network (TCN) trained to estimate human hip moments in the sagittal plane using exoskeleton sensor data (i.e., a hip encoder and thigh- and pelvis-mounted inertial measurement units). We conducted three analyses in which we iteratively retrained the network while: 1) varying the input sequence length of the model, 2) incorporating noncausal data into the input sequence, thus delaying the network estimates, and 3) time-shifting the labels to train the model to anticipate (i.e., predict) human hip moments. We found that 930 ms of causal input data maintained model performance while minimizing input sequence length (validation RMSE of 0.141±0.014 Nm/kg and R² of 0.883±0.025). Further, delaying the model estimate by up to 200 ms significantly improved model performance compared to the best causal estimators (p<0.05), improving estimator fidelity in use cases where delayed estimates are acceptable (e.g., in personalized health monitoring or diagnoses). Finally, we found that anticipating hip moments further in time linearly increased model RMSE and decreased R² (p<0.05); however, performance remained strong (R² > 0.85) when predicting up to 200 ms ahead. (See the label-shifting sketch after this list.)
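The following is a minimal sketch of the kind of gradient-boosted risk model and low/medium/high stratification described in the Medicaid abstract above, using the xgboost and scikit-learn packages. The synthetic data, hyperparameters, and percentile cut points are placeholders, not the study's.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder data standing in for system-, patient-, and region-level predictors (X)
# and a binary indicator of an OUD-related hospitalization/ED visit after the index
# COE visit (y). The ~4% event rate mirrors the abstract; everything else is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 25))
y = (rng.random(2000) < 0.04).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, stratify=y, random_state=0)

model = XGBClassifier(
    n_estimators=300, max_depth=4, learning_rate=0.05,
    scale_pos_weight=(y_tr == 0).sum() / max((y_tr == 1).sum(), 1),  # handle class imbalance
    eval_metric="auc",
)
model.fit(X_tr, y_tr)

scores = model.predict_proba(X_te)[:, 1]
print("C-statistic:", roc_auc_score(y_te, scores))

# Stratify into low/medium/high risk groups. The percentile cut points below are
# arbitrary illustrations, not the thresholds reported in the paper.
cuts = np.quantile(scores, [0.63, 0.94])
risk_group = np.digitize(scores, cuts)  # 0 = low, 1 = medium, 2 = high
```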
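And here is a short sketch of how sensor windows can be paired with time-shifted labels to produce delayed or anticipatory estimates, as in the hip-moment abstract above. The sample rate, channel count, and helper function are assumptions for illustration; only the 930 ms window and 200 ms shift echo values reported in that abstract.

```python
import numpy as np

def make_windows(sensor: np.ndarray, moments: np.ndarray, fs_hz: float,
                 window_s: float = 0.93, label_shift_s: float = 0.0):
    """Pair fixed-length sensor windows with a hip-moment label.

    label_shift_s > 0 trains the model to anticipate (predict ahead);
    label_shift_s < 0 delays the estimate (non-causal use cases).
    sensor : (T, channels) array of encoder + IMU samples
    moments: (T,) array of ground-truth hip moments (Nm/kg)
    """
    win = int(round(window_s * fs_hz))
    shift = int(round(label_shift_s * fs_hz))
    X, y = [], []
    for end in range(win, len(sensor)):
        label_idx = end - 1 + shift
        if 0 <= label_idx < len(moments):
            X.append(sensor[end - win:end])
            y.append(moments[label_idx])
    return np.stack(X), np.asarray(y)

# Example with synthetic data at an assumed 200 Hz sample rate:
fs = 200.0
sensor = np.random.randn(4000, 5)          # encoder + IMU channels (placeholder count)
moments = np.random.randn(4000)
X_causal, y_causal = make_windows(sensor, moments, fs)                   # 930 ms causal input
X_ahead, y_ahead = make_windows(sensor, moments, fs, label_shift_s=0.2)  # anticipate 200 ms
```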