Abstract Introduction: Studies investigating the relationship between blood pressure (BP) measurements from electronic health records (EHRs) and Alzheimer's disease (AD) rely on summary statistics, like BP variability, and have only been validated at a single institution. We hypothesize that leveraging BP trajectories can accurately estimate AD risk across different populations. Methods: In a retrospective cohort study, EHR data from Veterans Affairs (VA) patients were used to train and internally validate a machine learning model to predict AD onset within 5 years. External validation was conducted on patients from Michigan Medicine (MM). Results: The VA and MM cohorts included 6860 and 1201 patients, respectively. Model performance using BP trajectories was modest but comparable (area under the receiver operating characteristic curve [AUROC] = 0.64 [95% confidence interval (CI) = 0.54–0.73] for VA vs. AUROC = 0.66 [95% CI = 0.55–0.76] for MM). Conclusion: Approaches that directly leverage BP trajectories from EHR data could aid in AD risk stratification across institutions.
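The reported AUROCs with 95% confidence intervals can be reproduced in outline. Below is a minimal sketch (not the authors' code) of a percentile-bootstrap CI around AUROC; the labels and scores are hypothetical stand-ins for a 5-year AD onset model's outputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def auroc_with_bootstrap_ci(y_true, y_score, n_boot=2000, alpha=0.05):
    """AUROC point estimate with a percentile bootstrap CI."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    point = roc_auc_score(y_true, y_score)
    stats = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        # Resamples containing only one class have no defined AUROC; skip them.
        if len(np.unique(y_true[idx])) < 2:
            continue
        stats.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)

# Hypothetical labels and risk scores for illustration only.
y = rng.integers(0, 2, size=500)
scores = np.clip(y * 0.15 + rng.normal(0.5, 0.2, size=500), 0, 1)
auc, (ci_lo, ci_hi) = auroc_with_bootstrap_ci(y, scores)
print(f"AUROC = {auc:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```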
Reformulating patient stratification for targeting interventions by accounting for severity of downstream outcomes resulting from disease onset: a case study in sepsis
Abstract Objectives: To quantify differences between (1) stratifying patients by predicted disease onset risk alone and (2) stratifying by predicted disease onset risk and severity of downstream outcomes. We perform a case study of predicting sepsis. Materials and Methods: We performed a retrospective analysis using observational data from Michigan Medicine at the University of Michigan (U-M) between 2016 and 2020 and the Beth Israel Deaconess Medical Center (BIDMC) between 2008 and 2012. We measured the correlation between the estimated sepsis risk and the estimated effect of sepsis on mortality using Spearman's correlation. We compared patients stratified by sepsis risk with patients stratified by sepsis risk and effect of sepsis on mortality. Results: The U-M and BIDMC cohorts included 7282 and 5942 ICU visits; 7.9% and 8.1% developed sepsis, respectively. Among visits with sepsis, 21.9% and 26.3% experienced mortality at U-M and BIDMC. The effect of sepsis on mortality was weakly correlated with sepsis risk (U-M: 0.35 [95% CI: 0.33-0.37], BIDMC: 0.31 [95% CI: 0.28-0.34]). High-risk patients identified by the two stratification approaches overlapped by 66.8% and 52.8% at U-M and BIDMC, respectively. Accounting for risk of mortality identified an older population (U-M: age = 66.0 [interquartile range (IQR): 55.0-74.0] vs age = 63.0 [IQR: 51.0-72.0]; BIDMC: age = 74.0 [IQR: 61.0-83.0] vs age = 68.0 [IQR: 59.0-78.0]). Discussion: Predictive models that guide selective interventions ignore the effect of disease on downstream outcomes. Reformulating patient stratification to account for the estimated effect of disease on downstream outcomes identifies a different population compared to stratification on disease risk alone. Conclusion: Models that predict the risk of disease and ignore the effects of disease on downstream outcomes could be suboptimal for stratification.
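As a rough illustration of the comparison described here, the sketch below computes Spearman's correlation between a predicted sepsis risk and an estimated effect of sepsis on mortality, then measures the overlap between the top decile by risk alone and the top decile by a combined score. The data are simulated, and the product combination rule is an assumption for illustration; the paper's exact formulation may differ.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical per-visit estimates; in the study these come from fitted models.
n = 7000
sepsis_risk = rng.beta(2, 20, size=n)          # predicted P(sepsis)
effect_on_mortality = rng.beta(2, 8, size=n)   # estimated effect of sepsis on mortality

rho, p = spearmanr(sepsis_risk, effect_on_mortality)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")

# Stratify: (1) top decile by risk alone vs. (2) top decile by a combined
# score (here, an assumed product of risk and estimated effect).
k = int(0.10 * n)
by_risk = set(np.argsort(-sepsis_risk)[:k])
by_risk_and_effect = set(np.argsort(-(sepsis_risk * effect_on_mortality))[:k])
overlap = len(by_risk & by_risk_and_effect) / k
print(f"Overlap between stratification approaches: {overlap:.1%}")
```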
- Award ID(s): 2124127
- PAR ID: 10579570
- Publisher / Repository: Oxford University Press
- Date Published:
- Journal Name: Journal of the American Medical Informatics Association
- Volume: 32
- Issue: 5
- ISSN: 1067-5027
- Format(s): Medium: X
- Size(s): p. 905-913
- Sponsoring Org: National Science Foundation
More Like this
- 
ABSTRACT Understanding clinical trajectories of sepsis patients is crucial for prognostication, resource planning, and informing digital twin models of critical illness. This study aims to identify common clinical trajectories based on dynamic assessment of cardiorespiratory support, using validated electronic health record data covering a retrospective cohort of 19,177 patients with sepsis admitted to intensive care units (ICUs) of Mayo Clinic Hospitals over an 8-year period. Patient trajectories were modeled from ICU admission up to 14 days using an unsupervised machine learning two-stage clustering method based on cardiorespiratory support in the ICU and hospital discharge status. Of the 19,177 patients, 42% were female, with a median age of 65 (interquartile range [IQR], 55–76) years, an Acute Physiology, Age, and Chronic Health Evaluation (APACHE) III score of 70 (IQR, 56–87), hospital length of stay (LOS) of 7 (IQR, 4–12) days, and ICU LOS of 2 (IQR, 1–4) days. Four distinct trajectories were identified: fast recovery (27%, with a mortality rate of 3.5% and median hospital LOS of 3 [IQR, 2–15] days), slow recovery (62%, with a mortality rate of 3.6% and hospital LOS of 8 [IQR, 6–13] days), fast decline (4%, with a mortality rate of 99.7% and hospital LOS of 1 [IQR, 0–1] day), and delayed decline (7%, with a mortality rate of 97.9% and hospital LOS of 5 [IQR, 3–8] days). The distinct trajectories remained robust and were distinguished by the Charlson Comorbidity Index and APACHE III scores, as well as day 1 and day 3 SOFA scores (P < 0.001, ANOVA). These findings provide a foundation for developing prediction models and digital twin decision support tools, improving both shared decision making and resource planning.
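One loose way to set up a two-stage clustering of this kind is sketched below: stage one clusters 14-day cardiorespiratory support trajectories, and stage two splits each cluster by hospital discharge status. This is an assumption-laden stand-in, not the Mayo implementation; the trajectory encoding and cluster counts are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Hypothetical inputs: one row per patient, columns = daily cardiorespiratory
# support level over the first 14 ICU days (0 = none ... 3 = full support),
# plus hospital discharge status (True = died).
n_patients = 1000
trajectories = rng.integers(0, 4, size=(n_patients, 14)).astype(float)
died = rng.random(n_patients) < 0.1

# Stage 1: cluster the 14-day support trajectories by shape.
stage1 = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(trajectories)

# Stage 2: split each stage-1 cluster by discharge status, yielding
# recovery-like vs. decline-like variants of each trajectory shape.
labels = 2 * stage1 + died.astype(int)
for lab in np.unique(labels):
    mask = labels == lab
    print(f"trajectory group {lab}: n={mask.sum()}, mortality={died[mask].mean():.1%}")
```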
- 
Abstract Background: Few interventions are known to reduce the incidence of respiratory failure that occurs following elective surgery (postoperative respiratory failure; PRF). We previously reported risk factors associated with PRF that occurs within the first 5 days after elective surgery (early PRF; E-PRF); however, PRF that occurs six or more days after elective surgery (late PRF; L-PRF) likely represents a different entity. We hypothesized that L-PRF would be associated with worse outcomes and different risk factors than E-PRF. Methods: This was a retrospective matched case-control study of 59,073 consecutive adult patients admitted for elective non-cardiac and non-pulmonary surgical procedures at one of five University of California academic medical centers between October 2012 and September 2015. We identified patients with L-PRF, confirmed by surgeon and intensivist subject matter expert review, and matched them 1:1 to patients who did not develop PRF (No-PRF) based on hospital, age, and surgical procedure. We then analyzed risk factors and outcomes associated with L-PRF compared to E-PRF and No-PRF. Results: Among 95 patients with L-PRF, 50.5% were female, 71.6% white, 27.4% Hispanic, and 53.7% Medicare recipients; the median age was 63 years (IQR 56, 70). Compared to 95 matched patients with No-PRF and 319 patients who developed E-PRF, L-PRF was associated with higher morbidity and mortality, longer hospital and intensive care unit length of stay, and increased costs. Compared to No-PRF, factors associated with L-PRF included: pre-existing neurologic disease (OR 4.36, 95% CI 1.81–10.46), anesthesia duration per hour (OR 1.22, 95% CI 1.04–1.44), and maximum intraoperative peak inspiratory pressure per cm H2O (OR 1.14, 95% CI 1.06–1.22). Conclusions: We identified that pre-existing neurologic disease, longer duration of anesthesia, and greater maximum intraoperative peak inspiratory pressures were associated with respiratory failure that developed six or more days after elective surgery in adult patients (L-PRF). Interventions targeting these factors may be worthy of future evaluation.
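For odds ratios of the kind reported here, a simplified sketch is shown below: hypothetical 1:1 matched case-control rows and an unconditional logistic regression whose exponentiated coefficients give ORs per unit of each predictor. A matched design would more properly use conditional logistic regression; that simplification, and all variable names and values, are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical matched data: 95 L-PRF cases followed by 95 matched controls.
n_pairs = 95
df = pd.DataFrame({
    "l_prf": np.repeat([1, 0], n_pairs),
    "neuro_disease": rng.random(2 * n_pairs) < 0.2,      # pre-existing neurologic disease
    "anesthesia_hours": rng.gamma(3.0, 1.5, size=2 * n_pairs),
    "max_pip_cmh2o": rng.normal(22, 5, size=2 * n_pairs),  # peak inspiratory pressure
})

X = sm.add_constant(df[["neuro_disease", "anesthesia_hours", "max_pip_cmh2o"]].astype(float))
fit = sm.Logit(df["l_prf"], X).fit(disp=0)

# Exponentiated coefficients are odds ratios per unit of each predictor
# (per hour of anesthesia, per cm H2O of peak inspiratory pressure).
ors = np.exp(fit.params)
ci = np.exp(fit.conf_int())
print(pd.DataFrame({"OR": ors, "CI 2.5%": ci[0], "CI 97.5%": ci[1]}))
```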
- 
OBJECTIVES: The optimal approach for resuscitation in septic shock remains unclear despite multiple randomized controlled trials (RCTs). Our objective was to investigate whether previously uncharacterized variation across individuals in their response to resuscitation strategies may contribute to conflicting average treatment effects in prior RCTs. DESIGN: We randomly split study sites from the Australian Resuscitation of Sepsis Evaluation (ARISE) and Protocolized Care for Early Septic Shock (ProCESS) trials into derivation and validation cohorts. We trained machine learning models to predict individual absolute risk differences (iARDs) in 90-day mortality in derivation cohorts, tested for heterogeneity of treatment effect (HTE) in validation cohorts, and swapped these cohorts in sensitivity analyses. We fit the best-performing model in a combined dataset to explore the roles of patient characteristics and individual components of early goal-directed therapy (EGDT) in determining treatment responses. SETTING: Eighty-one sites in Australia, New Zealand, Hong Kong, Finland, the Republic of Ireland, and the United States. PATIENTS: Adult patients presenting to the emergency department with severe sepsis or septic shock. INTERVENTIONS: EGDT vs. usual care. MEASUREMENTS AND MAIN RESULTS: A local-linear random forest model performed best in predicting iARDs. In the validation cohort, HTE was confirmed, evidenced by an interaction between iARD prediction and treatment (p < 0.001). When patients were grouped based on predicted iARDs, treatment response increased from the lowest to the highest quintile (absolute risk difference [95% CI], –8% [–19% to 4%] and relative risk, 1.34 [0.89–2.01] in quintile 1, suggesting harm from EGDT; 12% [1% to 23%] and 0.64 [0.42–0.96] in quintile 5, suggesting benefit). Sensitivity analyses showed similar findings. Pre-intervention albumin contributed the most to HTE. Analyses of individual EGDT components were inconclusive. CONCLUSIONS: Treatment response to EGDT varied across patients in two multicenter RCTs, with large benefits for some patients while others were harmed. Patient characteristics, including albumin, were most important in identifying HTE.
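The iARD idea can be sketched with a simple T-learner: fit separate mortality models per trial arm, take the difference in predicted risk as each patient's estimated benefit, and group patients into quintiles of predicted benefit. The study used a local-linear random forest with held-out validation; the plain random forests, in-sample evaluation, and simulated trial data below are stand-ins for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

# Simulated trial-like data: X = baseline features (e.g., albumin),
# t = assignment (1 = EGDT, 0 = usual care), y = 90-day mortality.
n, p = 2000, 10
X = rng.normal(size=(n, p))
t = rng.integers(0, 2, size=n)
base = 0.25 + 0.05 * np.tanh(X[:, 0])
y = (rng.random(n) < np.where(t == 1, base - 0.03 * X[:, 0], base)).astype(int)

# T-learner: fit one outcome model per arm, then take the difference in
# predicted mortality as the individual absolute risk difference (iARD).
m1 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[t == 1], y[t == 1])
m0 = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[t == 0], y[t == 0])
iard = m0.predict_proba(X)[:, 1] - m1.predict_proba(X)[:, 1]  # > 0 favors EGDT

# Group patients into quintiles of predicted benefit, as in the HTE analysis.
quintile = np.digitize(iard, np.quantile(iard, [0.2, 0.4, 0.6, 0.8]))
for q in range(5):
    m = quintile == q
    ard = y[m & (t == 0)].mean() - y[m & (t == 1)].mean()
    print(f"quintile {q + 1}: observed ARD = {ard:+.3f}")
```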
- 
Introduction: During the COVID-19 Delta variant surge, the CLAIRE cross-sectional study sampled saliva from 120 hospitalized patients, 116 of whom had a positive COVID-19 PCR test. Patients received antibiotics upon admission due to possible secondary bacterial infections, with patients at risk of sepsis receiving broad-spectrum antibiotics (BSA). Methods: The saliva samples were analyzed with shotgun DNA metagenomics and respiratory RNA virome sequencing. Medical records for the period of hospitalization were obtained for all patients. Once hospitalization outcomes were known, patients were classified based on their COVID-19 disease severity and the antibiotics they received. Results: Our study reveals that BSA regimens differentially impacted the human salivary microbiome and disease progression. Twelve patients died, and all of them received BSA. Significant associations were found between the composition of the COVID-19 saliva microbiome and BSA use, and between SARS-CoV-2 genome coverage and severity of disease. We also found significant associations between the non-bacterial microbiome and severity of disease, with Candida albicans detected most frequently in critical patients. For patients who did not receive BSA before saliva sampling, our study suggests Staphylococcus aureus as a potential risk factor for sepsis. Discussion: Our results indicate that the course of the infection may be explained by both monitoring antibiotic treatment and profiling a patient's salivary microbiome, establishing a compelling link between the microbiome and the specific antibiotic type and timing of treatment. This approach can aid with emergency room triage and inpatient management, but it also requires a better understanding of and access to narrow-spectrum agents that target pathogenic bacteria.
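One standard way to test taxon-level associations like these is a per-taxon rank test with false discovery rate control, sketched below on a hypothetical relative-abundance table. This is not the study's analysis pipeline; the taxa, sample counts, and labels are invented for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(5)

# Hypothetical relative-abundance table: rows = saliva samples, columns = taxa.
taxa = ["Staphylococcus aureus", "Candida albicans", "Streptococcus mitis"]
abund = rng.dirichlet(alpha=np.ones(len(taxa)), size=120)
received_bsa = rng.random(120) < 0.5

# Test each taxon for an abundance shift between BSA and non-BSA patients,
# then control the false discovery rate across taxa (Benjamini-Hochberg).
pvals = [
    mannwhitneyu(abund[received_bsa, j], abund[~received_bsa, j]).pvalue
    for j in range(len(taxa))
]
reject, qvals, _, _ = multipletests(pvals, method="fdr_bh")
for name, q, sig in zip(taxa, qvals, reject):
    print(f"{name}: q = {q:.3f}{'  *' if sig else ''}")
```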