Abstract Background: We hypothesized that alemtuzumab use is safe in pediatric kidney transplant recipients (KTRs), with long-term outcomes equivalent to those of other induction agents. Methods: Using pediatric kidney transplant recipient data in the UNOS database between January 1, 2000, and June 30, 2022, multivariate logistic regression, multivariable Cox regression, and survival analyses were used to estimate the likelihoods of first-year and all-time hospitalizations, acute rejection, CMV infection, delayed graft function (DGF), graft loss, and patient mortality among recipients of three common induction regimens (ATG, alemtuzumab, and basiliximab). Results: There were no differences in acute rejection or graft failure among induction or maintenance regimens. Basiliximab was associated with lower odds of DGF in deceased donor recipients (OR 0.77 [0.60–0.99], p = .04). Mortality was increased in patients treated with steroid-containing maintenance (HR 1.3 [1.005–1.7], p = .045). Alemtuzumab induction was associated with a lower risk of CMV infection than ATG (OR 0.76 [0.59–0.99], p = .039). Steroid-containing maintenance conferred a lower rate of PTLD than steroid-free maintenance (HR 0.59 [0.4–0.8], p = .001). Alemtuzumab was associated with lower risk of hospitalization within 1 year (OR 0.79 [0.67–0.95], p = .012) and 5 years (HR 0.54 [0.46–0.65], p < .001) of transplantation. Steroid maintenance also decreased 5-year hospitalization risk (HR 0.78 [0.69–0.89], p < .001). Conclusions: Pediatric KTRs may be safely treated with alemtuzumab induction without increased risk of acute rejection, DGF, graft loss, or patient mortality. The decreased risk of CMV infection and lower hospitalization rates compared with other agents make alemtuzumab an attractive choice for induction in pediatric KTRs, especially in those who cannot tolerate ATG.
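The odds ratios quoted above come from logistic regression on the log-odds scale. As a hedged illustration of how such estimates are read, the sketch below converts a regression coefficient and standard error into an odds ratio with a 95% confidence interval; the beta and SE values are invented to roughly mimic the reported alemtuzumab-vs-ATG CMV result, not taken from the study:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Turn a logistic-regression coefficient and its standard error
    into an odds ratio with a 95% confidence interval (both on the
    exponentiated scale)."""
    point = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return point, lower, upper

# Hypothetical beta/SE chosen to resemble the CMV finding above
or_, lo, hi = odds_ratio_ci(beta=-0.274, se=0.133)
print(f"OR {or_:.2f} [{lo:.2f}-{hi:.2f}]")
```

A CI that excludes 1.0, as here, corresponds to a p-value below .05 at the same confidence level.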
The Houston Methodist lung transplant risk model – a validated tool for pre-transplant risk assessment
BACKGROUND: Lung transplantation is the gold standard for a carefully selected patient population with end-stage lung disease. We sought to create a unique risk stratification model using only preoperative recipient data to predict one-year postoperative mortality during our pre-transplant assessment. METHODS: Data on lung transplant recipients at Houston Methodist Hospital (HMH) from 1/2009 to 12/2014 were extracted from the United Network for Organ Sharing (UNOS) database. Patients were randomly divided into development and validation cohorts. Cox proportional-hazards models were fitted. Variables associated with 1-year mortality post-transplant were assigned weights based on their beta coefficients, and risk scores were derived. Patients were stratified into low-, medium- and high-risk categories. The model was validated using the validation dataset and data from other US transplant centers in the UNOS database. RESULTS: We randomized 633 lung recipients from HMH into a development cohort (n=317) and a validation cohort (n=316). One-year survival after transplant differed significantly among risk groups: 95% (low-risk), 84% (medium-risk), and 72% (high-risk) (p<0.001), with a C-statistic of 0.74. Patient survival in the validation cohort also differed significantly among risk groups (85%, 77%, and 65%, respectively; p<0.001). Validation of the model against the UNOS dataset included 9,920 patients and found 1-year survival of 91%, 86%, and 82%, respectively (p<0.001). CONCLUSIONS: Using only recipient data collected at the time of pre-listing evaluation, our simple scoring system has good discriminative power and can be a practical tool in the assessment and selection of potential lung transplant recipients.
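The weighting step described in METHODS (beta coefficients scaled into point weights, summed, and mapped to risk tiers) can be sketched as follows. The variables, beta values, scale, and cut-offs below are hypothetical placeholders for illustration, not the published HMH model:

```python
# Illustrative beta coefficients for preoperative recipient variables
# (invented values, not the study's fitted model)
BETAS = {"age_over_65": 0.45, "ecmo_bridge": 1.10, "creatinine_high": 0.60}

def points(betas, scale=10):
    """Scale each Cox beta to an integer point weight."""
    return {name: round(b * scale) for name, b in betas.items()}

def risk_category(patient, weights, low_cut=6, high_cut=12):
    """Sum the points for the risk factors a patient has and map the
    total score to a low/medium/high tier (cut-offs are illustrative)."""
    score = sum(w for name, w in weights.items() if patient.get(name))
    if score < low_cut:
        return score, "low"
    if score < high_cut:
        return score, "medium"
    return score, "high"

weights = points(BETAS)
patient = {"age_over_65": True, "ecmo_bridge": True}
print(risk_category(patient, weights))  # (15, 'high')
```

Integer point scores trade a little precision for a model clinicians can apply at the bedside without software.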
- Award ID(s):
- 1826144
- PAR ID:
- 10110737
- Date Published:
- Journal Name:
- The Annals of Thoracic Surgery
- ISSN:
- 0003-4975
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Keim-Malpass, Jessica (Ed.) During the early stages of hospital admission, clinicians use limited information to make decisions as patient acuity evolves. We hypothesized that clustering analysis of vital signs measured within six hours of hospital admission would reveal distinct patient phenotypes with unique pathophysiological signatures and clinical outcomes. We created a longitudinal electronic health record dataset for 75,762 adult patient admissions to a tertiary care center in 2014–2016 lasting six hours or longer. Physiotypes were derived via unsupervised machine learning in a training cohort of 41,502 patients by applying consensus k-means clustering to six vital signs measured within six hours of admission. Reproducibility and correlation with clinical biomarkers and outcomes were assessed in a validation cohort of 17,415 patients and a testing cohort of 16,845 patients. Training, validation, and testing cohorts had similar age (54–55 years) and sex (55% female) distributions. There were four distinct clusters. Physiotype A had physiologic signals consistent with early vasoplegia, hypothermia, and low-grade inflammation, and favorable short- and long-term clinical outcomes despite early, severe illness. Physiotype B exhibited early tachycardia, tachypnea, and hypoxemia followed by the highest incidence of prolonged respiratory insufficiency, sepsis, acute kidney injury, and short- and long-term mortality. Physiotype C had minimal early physiological derangement and favorable clinical outcomes. Physiotype D had the greatest prevalence of chronic cardiovascular and kidney disease, presented with severely elevated blood pressure, and had good short-term outcomes but suffered increased 3-year mortality. Comparing sequential organ failure assessment (SOFA) scores across physiotypes demonstrated that clustering did not simply recapitulate previously established acuity assessments.
In a heterogeneous cohort of hospitalized patients, unsupervised machine learning techniques applied to routine, early vital sign data identified physiotypes with unique disease categories and distinct clinical outcomes. This approach has the potential to augment understanding of pathophysiology by distilling thousands of disease states into a few physiological signatures.
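The clustering step can be sketched in miniature: z-score each vital sign so no single scale dominates, then run Lloyd's k-means. The toy code below uses four synthetic admissions, k=2, and deterministic farthest-point seeding; it is not the study's consensus pipeline, which repeated clustering over subsamples of 41,502 real patients:

```python
def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def zscore(rows):
    """Column-wise z-score so each vital sign contributes equally."""
    out = []
    for col in zip(*rows):
        mu = sum(col) / len(col)
        sd = (sum((x - mu) ** 2 for x in col) / len(col)) ** 0.5 or 1.0
        out.append([(x - mu) / sd for x in col])
    return [list(r) for r in zip(*out)]

def kmeans(rows, k=2, iters=20):
    """Plain Lloyd's k-means with deterministic farthest-point seeding."""
    centers = [rows[0]]
    while len(centers) < k:
        centers.append(max(rows, key=lambda r: min(dist2(r, c) for c in centers)))
    labels = [0] * len(rows)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(r, centers[j])) for r in rows]
        for j in range(k):
            members = [r for r, lab in zip(rows, labels) if lab == j]
            if members:
                centers[j] = [sum(col) / len(col) for col in zip(*members)]
    return labels

# Toy admissions: [heart rate, resp rate, SpO2, temp C, SBP, DBP]
vitals = [
    [120, 28, 88, 38.5, 95, 60],   # tachycardic, hypoxemic (B-like)
    [118, 30, 90, 38.2, 92, 58],
    [72, 14, 98, 36.8, 120, 78],   # minimal derangement (C-like)
    [70, 15, 97, 36.9, 118, 76],
]
labels = kmeans(zscore(vitals), k=2)
print(labels)  # the two deranged and two normal admissions separate
```

Consensus variants rerun this procedure many times on resampled data and keep only groupings that are stable across runs.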
-
Abstract Introduction: IgA nephropathy (IgAN) can cause end-stage kidney disease (ESKD). This study assesses the impact of induction and maintenance immunosuppression on IgAN recurrence, graft survival, and mortality in living and deceased donor kidney transplants (LDKT and DDKT). Methods: Retrospective analysis of the UNOS database in adults with ESKD secondary to IgAN who received kidney transplants between January 1, 2000, and June 30, 2022. Patients with thymoglobulin (ATG), alemtuzumab, or basiliximab/daclizumab induction and calcineurin inhibitor (CNI) plus mycophenolate mofetil (MMF) maintenance, with or without prednisone, were analyzed. Multivariate logistic regression was performed to identify factors correlated with IgAN recurrence. Multivariable Cox regression analyses were performed for clinically suspected risk factors. Kaplan–Meier analysis was used for overall graft survival. Results: Compared to ATG with steroid maintenance, alemtuzumab with steroid increased the odds of IgAN recurrence in DDKTs (OR 1.90, p < .010, 95% CI [1.169–3.101]). Alemtuzumab with and without steroid increased the odds of recurrence by 52% (p = .036) and 56% (p = .005), respectively, in LDKTs. ATG without steroids was associated with lower risk of IgAN recurrence (HR .665, p = .044, 95% CI [.447–.989]), graft failure (HR .758, p = .002, 95% CI [.633–.907]), and death (HR .619, p < .001, 95% CI [.490–.783]) in DDKTs. Recurrence was strongly associated with risk of graft failure in DDKTs and LDKTs and with death in LDKTs. Conclusion: In patients with IgAN requiring a kidney transplant, alemtuzumab induction correlates with increased IgAN recurrence. Relapse significantly affects graft survival and mortality. ATG without steroids is associated with the least graft loss and mortality.
-
OBJECTIVES: The optimal approach for resuscitation in septic shock remains unclear despite multiple randomized controlled trials (RCTs). Our objective was to investigate whether previously uncharacterized variation across individuals in their response to resuscitation strategies may contribute to conflicting average treatment effects in prior RCTs. DESIGN: We randomly split study sites from the Australian Resuscitation of Sepsis Evaluation (ARISE) and Protocolized Care for Early Septic Shock (ProCESS) trials into derivation and validation cohorts. We trained machine learning models to predict individual absolute risk differences (iARDs) in 90-day mortality in derivation cohorts, tested for heterogeneity of treatment effect (HTE) in validation cohorts, and swapped these cohorts in sensitivity analyses. We fit the best-performing model in a combined dataset to explore the roles of patient characteristics and individual components of early goal-directed therapy (EGDT) in determining treatment responses. SETTING: Eighty-one sites in Australia, New Zealand, Hong Kong, Finland, the Republic of Ireland, and the United States. PATIENTS: Adult patients presenting to the emergency department with severe sepsis or septic shock. INTERVENTIONS: EGDT vs. usual care. MEASUREMENTS AND MAIN RESULTS: A local-linear random forest model performed best in predicting iARDs. In the validation cohort, HTE was confirmed, evidenced by an interaction between iARD prediction and treatment (p < 0.001). When patients were grouped based on predicted iARDs, treatment response increased from the lowest to the highest quintile (absolute risk difference [95% CI] –8% [–19% to 4%] and relative risk 1.34 [0.89–2.01] in quintile 1, suggesting harm from EGDT; 12% [1–23%] and 0.64 [0.42–0.96] in quintile 5, suggesting benefit). Sensitivity analyses showed similar findings. Pre-intervention albumin contributed the most to HTE. Analyses of individual EGDT components were inconclusive.
CONCLUSIONS: Treatment response to EGDT varied across patients in two multicenter RCTs, with large benefits for some patients while others were harmed. Patient characteristics, including albumin, were most important in identifying HTE.
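The quintile read-out described above can be sketched as follows: rank patients by model-predicted iARD, bin into quintiles, and compare observed mortality between arms within each bin. All data below are synthetic, and the iARD column is a stand-in for model predictions:

```python
def quintile_ards(patients):
    """patients: (predicted_iard, treated_flag, died_flag) tuples.
    Returns the observed control-minus-treated mortality difference
    per predicted-iARD quintile (positive = treatment benefit)."""
    ranked = sorted(patients, key=lambda p: p[0])
    n = len(ranked)
    ards = []
    for q in range(5):
        group = ranked[q * n // 5:(q + 1) * n // 5]
        def rate(arm):
            sub = [p for p in group if p[1] == arm]
            return sum(p[2] for p in sub) / len(sub)
        ards.append(rate(0) - rate(1))
    return ards

# Synthetic cohort: five blocks of 20 patients whose treated-arm mortality
# improves from worse than control (harm) to better than control (benefit).
control_deaths = [3, 3, 3, 3, 3]
treated_deaths = [4, 3, 3, 2, 1]
patients = []
for q in range(5):
    for i in range(10):
        patients.append((q, 0, 1 if i < control_deaths[q] else 0))  # control arm
        patients.append((q, 1, 1 if i < treated_deaths[q] else 0))  # treated arm
ards = quintile_ards(patients)
print(ards)  # negative in the first quintile (harm), positive in the last
```

A model exhibits useful HTE exactly when these observed differences track the predicted ranking, as the trials' validation cohorts showed.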
-
BACKGROUND: Classification of perioperative risk is important for patient care, resource allocation, and guiding shared decision-making. Using discriminative features from the electronic health record (EHR), machine-learning algorithms can create digital phenotypes among heterogeneous populations, representing distinct patient subpopulations grouped by shared characteristics, from which we can personalize care, anticipate clinical care trajectories, and explore therapies. We hypothesized that digital phenotypes in preoperative settings are associated with postoperative adverse events including in-hospital and 30-day mortality, 30-day surgical redo, intensive care unit (ICU) admission, and hospital length of stay (LOS). METHODS: We identified all laminectomies, colectomies, and thoracic surgeries performed over a 9-year period in a large hospital system. Seventy-seven readily extractable preoperative features were first selected from clinical consensus, including demographics, medical history, and lab results. Three surgery-specific datasets were built and split into derivation and validation cohorts by chronological occurrence. Consensus k-means clustering was performed independently on each derivation cohort, from which phenotype characteristics were explored. Cluster assignments were used to train a random forest model to assign patient phenotypes in validation cohorts. We reconducted descriptive analyses on validation cohorts to confirm the similarity of patient characteristics with derivation cohorts, and quantified the association of each phenotype with postoperative adverse events using the area under the receiver operating characteristic curve (AUROC). We compared our approach to the American Society of Anesthesiologists (ASA) score alone and investigated a combination of our phenotypes with the ASA score. RESULTS: A total of 7251 patients met inclusion criteria, of which 2770 were held out in a validation dataset based on chronological occurrence.
Using segmentation metrics and clinical consensus, 3 distinct phenotypes were created for each surgery. The main features used for segmentation included urgency of the procedure, preoperative LOS, age, and comorbidities. The most relevant characteristics varied for each of the 3 surgeries. Low-risk phenotype alpha was the most common (2039 of 2770, 74%), while high-risk phenotype gamma was the rarest (302 of 2770, 11%). Adverse outcomes progressively increased from phenotype alpha to gamma, including 30-day mortality (0.3%, 2.1%, and 6.0%, respectively), in-hospital mortality (0.2%, 2.3%, and 7.3%), and prolonged hospital LOS (3.4%, 22.1%, and 25.8%). When combined with the ASA score, digital phenotypes achieved higher AUROC than the ASA score alone (hospital mortality: 0.91 vs 0.84; prolonged hospitalization: 0.80 vs 0.71). CONCLUSIONS: For 3 frequently performed surgeries, we identified 3 digital phenotypes. The typical profiles of each phenotype were described and could be used to anticipate adverse postoperative events.
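The AUROC comparison above has a simple interpretation: AUROC is the Mann-Whitney probability that a randomly chosen case outranks a randomly chosen control, so adding discriminative features can raise it by breaking ties in a coarse score. The sketch below illustrates this with synthetic scores and labels; the hypothetical "phenotype-adjusted" score is an invented stand-in for the study's ASA-plus-phenotype combination:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive outranks a randomly chosen negative,
    counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: a coarse ASA-like integer score ties cases with
# controls; a hypothetical phenotype adjustment breaks the ties.
labels    = [1, 1, 0, 0, 0, 0]
asa_only  = [3, 2, 3, 2, 1, 1]
asa_pheno = [3.9, 2.8, 3.1, 2.2, 1.0, 1.1]
print(auroc(asa_only, labels), auroc(asa_pheno, labels))
```

Reported gains such as 0.84 to 0.91 for hospital mortality reflect exactly this kind of improved case-vs-control ranking.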