
Title: Deep learning unlocks the true potential of organ donation after circulatory death with accurate prediction of time-to-death
Abstract: Increasing the number of organ donations after circulatory death (DCD) has been identified as one of the most important ways of addressing the ongoing organ shortage. While recent technological advances in organ transplantation have increased their success rate, a substantial challenge in increasing the number of DCD donations lies in the uncertainty regarding the timing of cardiac death after terminal extubation, which raises the risk of prolonged ischemic organ injury and negatively affects post-transplant outcomes. In this study, we trained and externally validated an ODE-RNN model, which combines a recurrent neural network with neural ordinary differential equations and excels at processing irregularly sampled time-series data. The model is designed to predict time-to-death following terminal extubation in the intensive care unit (ICU) using the history of clinical observations. Our model was trained on a cohort of 3,238 patients from Yale New Haven Hospital and validated on an external cohort of 1,908 patients from six hospitals across Connecticut. The model achieved accuracies of 95.3 ± 1.0% and 95.4 ± 0.7% for predicting whether death would occur in the first 30 and 60 minutes, respectively, with a calibration error of 0.024 ± 0.009. Heart rate, respiratory rate, mean arterial blood pressure (MAP), oxygen saturation (SpO2), and Glasgow Coma Scale (GCS) scores were identified as the most important predictors. Surpassing existing clinical scores, our model sets the stage for reduced organ acquisition costs and improved post-transplant outcomes.
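The core idea of an ODE-RNN — evolving a hidden state under a learned ODE between irregularly spaced observations, then applying an RNN-style update at each observation — can be illustrated with a minimal sketch. This is a toy illustration only: the dynamics function, dimensions, and weights below are hypothetical and untrained, not the architecture or parameters from the paper.

```python
# Toy ODE-RNN sketch (illustrative; not the paper's trained model).
# Between observations the hidden state follows dh/dt = tanh(W_ode @ h),
# integrated with fixed-step Euler; each observation triggers an RNN update.
import numpy as np

rng = np.random.default_rng(0)
H, X = 8, 5                                   # hidden size, n. of vital-sign features
W_ode = rng.normal(scale=0.1, size=(H, H))    # ODE dynamics weights (untrained)
W_in = rng.normal(scale=0.1, size=(H, X))     # observation-update weights
W_h = rng.normal(scale=0.1, size=(H, H))

def ode_step(h, dt, n_steps=4):
    """Euler-integrate dh/dt = tanh(W_ode @ h) over an interval of length dt."""
    step = dt / n_steps
    for _ in range(n_steps):
        h = h + step * np.tanh(W_ode @ h)
    return h

def ode_rnn(times, obs):
    """times: (T,) observation timestamps; obs: (T, X) clinical observations."""
    h, t_prev = np.zeros(H), times[0]
    for t, x in zip(times, obs):
        h = ode_step(h, t - t_prev)           # evolve state across the time gap
        h = np.tanh(W_h @ h + W_in @ x)       # incorporate the new observation
        t_prev = t
    return h

# Irregular sampling: the gaps between observation times need not be equal.
times = np.array([0.0, 1.5, 2.0, 7.0])
obs = rng.normal(size=(4, X))
h_final = ode_rnn(times, obs)
print(h_final.shape)  # (8,)
```

A downstream classifier head (not shown) would map the final hidden state to the probability of death within a given horizon.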
Award ID(s):
2047856
PAR ID:
10617982
Author(s) / Creator(s):
; ; ; ;
Publisher / Repository:
Springer Nature
Date Published:
Journal Name:
Scientific Reports
Volume:
15
Issue:
1
ISSN:
2045-2322
Subject(s) / Keyword(s):
Machine learning, AI, Time-to-death prediction
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. BACKGROUND: Lung transplantation is the gold standard for a carefully selected patient population with end-stage lung disease. We sought to create a unique risk stratification model using only preoperative recipient data to predict one-year postoperative mortality during our pre-transplant assessment.
     METHODS: Data on lung transplant recipients at Houston Methodist Hospital (HMH) from 1/2009 to 12/2014 were extracted from the United Network for Organ Sharing (UNOS) database. Patients were randomly divided into development and validation cohorts. Cox proportional-hazards models were fitted. Variables associated with one-year post-transplant mortality were assigned weights based on their beta coefficients, and risk scores were derived. Patients were stratified into low-, medium-, and high-risk categories. Our model was validated using the validation dataset and data from other US transplant centers in the UNOS database.
     RESULTS: We randomized 633 lung recipients from HMH into the development (n=317) and validation (n=316) cohorts. One-year survival after transplant differed significantly among risk groups: 95% (low-risk), 84% (medium-risk), and 72% (high-risk) (p<0.001), with a C-statistic of 0.74. Patient survival in the validation cohort also differed significantly among risk groups (85%, 77%, and 65%, respectively; p<0.001). Validation of the model with the UNOS dataset included 9,920 patients and found one-year survival to be 91%, 86%, and 82%, respectively (p<0.001).
     CONCLUSIONS: Using only recipient data collected at the time of pre-listing evaluation, our simple scoring system has good discriminative power and can be a practical tool in the assessment and selection of potential lung transplant recipients.
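The scoring approach described above — turning Cox model beta coefficients into integer points and summing them into a stratified risk score — can be sketched as follows. The variable names, coefficients, and cut-points here are hypothetical placeholders, not the fitted values from the HMH model.

```python
# Sketch of a beta-coefficient-derived point score (hypothetical values).
# Each risk factor gets integer points proportional to its Cox beta
# coefficient; a patient's summed points map to a risk category.
betas = {"age_over_65": 0.45, "low_bmi": 0.30, "vent_support": 0.90}

def risk_points(betas, scale=10):
    """Assign integer points proportional to each beta coefficient."""
    return {k: round(b * scale) for k, b in betas.items()}

def risk_category(patient, points, cuts=(5, 10)):
    """Sum points for the risk factors a patient has; stratify by cut-points."""
    score = sum(p for k, p in points.items() if patient.get(k))
    if score < cuts[0]:
        return "low"
    return "medium" if score < cuts[1] else "high"

pts = risk_points(betas)
patient = {"age_over_65": True, "vent_support": True}
print(risk_category(patient, pts))  # "high"
```

In practice the cut-points would be chosen on the development cohort and the discrimination checked (e.g. via the C-statistic) on the validation cohort, as the abstract describes.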
  2. Abstract: We report measurements of the absolute branching fractions B(B_s^0 → D_s^± X), B(B_s^0 → D^0/D̄^0 X), and B(B_s^0 → D^± X), where the latter is measured for the first time. The results are based on a 121.4 fb⁻¹ data sample collected at the Υ(10860) resonance by the Belle detector at the KEKB asymmetric-energy e⁺e⁻ collider. We reconstruct one B_s^0 meson in e⁺e⁻ → Υ(10860) → B_s^* B̄_s^* events and measure yields of D_s^+, D^0, and D^+ mesons in the rest of the event. We obtain B(B_s^0 → D_s^± X) = (68.6 ± 7.2 ± 4.0)%, B(B_s^0 → D^0/D̄^0 X) = (21.5 ± 6.1 ± 1.8)%, and B(B_s^0 → D^± X) = (12.6 ± 4.6 ± 1.3)%, where the first uncertainty is statistical and the second is systematic. Averaging with previous Belle measurements gives B(B_s^0 → D_s^± X) = (63.4 ± 4.5 ± 2.2)% and B(B_s^0 → D^0/D̄^0 X) = (23.9 ± 4.1 ± 1.8)%. For the B_s^0 production fraction at the Υ(10860), we find f_s = (21.4 +1.5/−1.7)%.
  3. Abstract: This paper presents a search for dark matter, χ, using events with a single top quark and an energetic W boson. The analysis is based on proton–proton collision data collected with the ATLAS experiment at √s = 13 TeV during LHC Run 2 (2015–2018), corresponding to an integrated luminosity of 139 fb⁻¹. The search considers final states with zero or one charged lepton (electron or muon), at least one b-jet, and large missing transverse momentum. In addition, a result from a previous search considering two-charged-lepton final states is included in the interpretation of the results. The data are found to be in good agreement with the Standard Model predictions, and the results are interpreted in terms of 95% confidence-level exclusion limits in the context of a class of dark matter models involving an extended two-Higgs-doublet sector together with a pseudoscalar mediator particle. The search is particularly sensitive to on-shell production of the charged Higgs boson state, H^±, arising from the two-Higgs-doublet mixing, and its semi-invisible decays via the mediator particle, a: H^± → W^± a(→ χχ). Signal models with H^± masses up to 1.5 TeV and a masses up to 350 GeV are excluded assuming a tan β value of 1. For a masses of 150 (250) GeV, tan β values up to 2 are excluded for H^± masses between 200 (400) GeV and 1.5 TeV. Signals with tan β values between 20 and 30 are excluded for H^± masses between 500 and 800 GeV.
  4. Abstract: Measurements of Higgs boson production, where the Higgs boson decays into a pair of τ leptons, are presented, using a sample of proton–proton collisions collected with the CMS experiment at a center-of-mass energy of [equation missing], corresponding to an integrated luminosity of 138 fb⁻¹. Three analyses are presented. Two target Higgs boson production via gluon fusion and vector boson fusion: a neural-network-based analysis and an analysis based on an event categorization optimized on the ratio of signal to background events. These are complemented by an analysis targeting vector boson associated Higgs boson production. Results are presented in the form of signal strengths relative to the standard model predictions and products of cross sections and the branching fraction to τ leptons, in up to 16 different kinematic regions. For the simultaneous measurements of the neural-network-based analysis and the analysis targeting vector boson associated Higgs boson production, the signal strengths are found to be 0.82 ± 0.11 for inclusive Higgs boson production, 0.67 ± 0.19 (0.81 ± 0.17) for production mainly via gluon fusion (vector boson fusion), and 1.79 ± 0.45 for vector boson associated Higgs boson production.
  5. The disparity in the impact of COVID-19 on minority populations in the United States has been well established in the available data on deaths, case counts, and adverse outcomes. However, critical metrics used by public health officials and epidemiologists, such as a time-dependent viral reproductive number (R_t), can be hard to calculate from these data, especially for individual populations. Furthermore, disparities in the availability of testing, record-keeping infrastructure, or government funding in disadvantaged populations can produce incomplete data sets. In this work, we apply ensemble data assimilation techniques, which optimally combine model and data, to produce a more complete data set providing better estimates of the critical metrics used by public health officials and epidemiologists. We employ a multi-population SEIR (Susceptible, Exposed, Infected, and Recovered) model with a time-dependent reproductive number and an age-stratified contact rate matrix for each population. We assimilate the daily death data for populations separated by ethnic/racial groupings using a technique called Ensemble Smoothing with Multiple Data Assimilation (ESMDA) to estimate model parameters and produce an R_t(n) for the n-th population. We do this with three distinct approaches: (1) using the same contact matrices and prior R_t(n) for each population; (2) assigning contact matrices with increased contact rates for working-age and older adults to populations experiencing disparity; and (3) as in (2), but with a time-continuous update to R_t(n). We study 9 U.S. states and the District of Columbia, providing a complete time series of the pandemic in each and, in some cases, identifying disparities not otherwise evident in the aggregate statistics.
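The forward model underlying this kind of study — an SEIR compartment model driven by a time-dependent R_t — can be sketched in a few lines. This is a single-population toy with illustrative parameter values (latent period, infectious period, population size), not the multi-population, age-stratified model fitted in the paper, and no data assimilation is performed.

```python
# Toy single-population SEIR forward model with time-dependent R_t.
# Daily Euler steps; parameters are illustrative, not fitted values.
import numpy as np

def seir(R_t, gamma=1/7, sigma=1/5, N=1e6, E0=100, days=120):
    """R_t: callable day -> reproductive number. Returns (days, 4) S,E,I,R array."""
    S, E, I, R = N - E0, E0, 0.0, 0.0
    out = []
    for t in range(days):
        beta = R_t(t) * gamma          # transmission rate implied by R_t
        new_exp = beta * S * I / N     # S -> E flow
        new_inf = sigma * E            # E -> I flow
        new_rec = gamma * I            # I -> R flow
        S -= new_exp
        E += new_exp - new_inf
        I += new_inf - new_rec
        R += new_rec
        out.append((S, E, I, R))
    return np.array(out)

# A step change in R_t, e.g. from an intervention on day 30: 2.5 -> 0.9.
traj = seir(lambda t: 2.5 if t < 30 else 0.9)
print(traj.shape)  # (120, 4)
```

In the assimilation setting, an ensemble of such trajectories with perturbed parameters would be compared against observed daily deaths, and ESMDA would iteratively update the parameter ensemble, including the R_t(n) trajectory for each population.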