

Title: Using Survival Information in Truncation by Death Problems without the Monotonicity Assumption
Summary

In some randomized clinical trials, patients may die before the measurement time point of their outcomes. Even though randomization generates comparable treatment and control groups, the remaining survivors often differ significantly in background variables that are prognostic for the outcomes. This is called the truncation by death problem. Under the potential outcomes framework, the only well-defined causal effect on the outcome is within the subgroup of patients who would always survive under both treatment and control. Because the definition of this subgroup depends on the potential values of the survival status, which cannot be observed jointly, we cannot identify the causal effect of interest without making strong parametric assumptions and consequently can only obtain bounds for it. Unfortunately, such bounds are often too wide to be useful. We propose to use detailed survival information before and after the measurement time point of the outcomes to sharpen the bounds on the subgroup causal effect. Because survival times contain useful information about the final outcome, carefully utilizing them can improve statistical inference without imposing strong parametric assumptions. Moreover, we propose to use a copula model to relax the commonly invoked but often doubtful monotonicity assumption that the treatment extends the survival time for all patients.
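To make the role of the copula concrete, below is a minimal Python sketch, not the paper's estimator: it assumes a Gaussian copula linking the two potential survival times and uses illustrative marginal survival probabilities at the measurement time t*, showing how the copula parameter indexes the size of the always-survivor subgroup without requiring monotonicity.

from scipy.stats import norm, multivariate_normal

def always_survivor_prob(s1, s0, rho):
    """P(T(1) > t*, T(0) > t*) under a Gaussian copula with correlation rho.

    s1 and s0 are the marginal survival probabilities at the measurement time
    t* in the treatment and control arms, identified by randomization.
    """
    u1, u0 = 1.0 - s1, 1.0 - s0          # marginal death probabilities by t*
    # survival copula via inclusion-exclusion: P(U1 > u1, U0 > u0) = 1 - u1 - u0 + C(u1, u0)
    c = multivariate_normal(mean=[0.0, 0.0],
                            cov=[[1.0, rho], [rho, 1.0]]).cdf([norm.ppf(u1), norm.ppf(u0)])
    return 1.0 - u1 - u0 + c

# illustrative margins: 80% survival under treatment, 60% under control at t*
for rho in (0.0, 0.5, 0.9):              # sensitivity over the copula parameter
    print(rho, round(always_survivor_prob(0.8, 0.6, rho), 3))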

 
Award ID(s):
1713152
NSF-PAR ID:
10485974
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
Biometrics
Volume:
74
Issue:
4
ISSN:
0006-341X
Format(s):
Medium: X Size: p. 1232-1239
Sponsoring Org:
National Science Foundation
More Like this
  1. Summary

    Many clinical studies on non-mortality outcomes such as quality of life suffer from the problem that the non-mortality outcome can be censored by death, i.e., the non-mortality outcome cannot be measured if the subject dies before the time of measurement. To address the problem that this censoring by death is informative, it is of interest to consider the average effect of the treatment on the non-mortality outcome among subjects whose measurement would not be censored under either treatment or control, which is called the survivor average causal effect (SACE). The SACE is not point identified under usual assumptions, but bounds can be constructed. The previous literature on bounding the SACE uses only the survival information before the measurement of the non-mortality outcome. However, survival information after the measurement of the non-mortality outcome can also be informative. For randomized trials, we propose a set of ranked average score assumptions that make use of survival information before and after the measurement of the non-mortality outcome, which are plausibly satisfied in many studies, and we develop a two-step linear programming approach to obtain closed-form bounds on the SACE under our assumptions. We also extend our method to randomized trials with non-compliance, or observational studies with a valid instrumental variable, to obtain bounds on the complier SACE, which are presented in the online supplementary material. We apply our method to a randomized trial of the effect of mechanical ventilation with lower tidal volume versus traditional tidal volume for acute lung injury patients. Our bounds on the SACE are much narrower than the bounds obtained by using only the survival information before the measurement of the non-mortality outcome.
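    As a schematic illustration of the linear-programming machinery only, not of the ranked average score assumptions themselves, the Python sketch below bounds the SACE for a binary outcome using just the survival and outcome margins identified by randomization; assumptions such as those based on post-measurement survival information would enter as additional linear constraints. All numerical inputs are illustrative.

import itertools
import numpy as np
from scipy.optimize import linprog

# cells of the joint law of (S(1), S(0), Y(1), Y(0)); Y(z) is a dummy index when S(z) = 0
cells = list(itertools.product([0, 1], repeat=4))
n = len(cells)

# illustrative quantities identified by randomization
a1, b1 = 0.80, 0.55        # P(S=1 | Z=1), P(Y=1, S=1 | Z=1)
a0, b0 = 0.60, 0.30        # P(S=1 | Z=0), P(Y=1, S=1 | Z=0)

def sel(cond):
    return np.array([1.0 if cond(s1, s0, y1, y0) else 0.0 for s1, s0, y1, y0 in cells])

# Charnes-Cooper transform: variables x = (q_1, ..., q_n, t) with q = t * p, which turns
# the linear-fractional SACE objective into a linear one while keeping constraints linear
rows, rhs = [], []
for ind, m in [(sel(lambda s1, s0, y1, y0: True), 1.0),
               (sel(lambda s1, s0, y1, y0: s1 == 1), a1),
               (sel(lambda s1, s0, y1, y0: s1 == 1 and y1 == 1), b1),
               (sel(lambda s1, s0, y1, y0: s0 == 1), a0),
               (sel(lambda s1, s0, y1, y0: s0 == 1 and y0 == 1), b0)]:
    rows.append(np.append(ind, -m))        # ind' q - m * t = 0
    rhs.append(0.0)
rows.append(np.append(sel(lambda s1, s0, y1, y0: s1 == 1 and s0 == 1), 0.0))
rhs.append(1.0)                            # normalization: always-survivor mass scaled to 1
A_eq, b_eq = np.array(rows), np.array(rhs)

# objective: E[Y(1) - Y(0) | always-survivor] in the transformed variables
c = np.append([(y1 - y0) if s1 == 1 and s0 == 1 else 0.0 for s1, s0, y1, y0 in cells], 0.0)

bnds = [(0, None)] * (n + 1)
lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bnds).fun
upper = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=bnds).fun
print(f"no-assumption SACE bounds: [{lower:.3f}, {upper:.3f}]")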

     
  2. Summary

    Comparative effectiveness research often involves evaluating the differences in the risks of an event of interest between two or more treatments using observational data. Often, the post-treatment outcome of interest is whether the event happens within a pre-specified time window, which leads to a binary outcome. One source of bias in estimating the causal treatment effect is the presence of confounders, which are usually controlled using propensity score-based methods. An additional source of bias is right-censoring, which occurs when the information on the outcome of interest is not completely available due to dropout, study termination, or treatment switch before the event of interest. We propose an inverse probability weighted regression-based estimator that simultaneously handles both confounding and right-censoring, which we call CIPWR, with the letter C highlighting the censoring component. CIPWR estimates the average treatment effects by averaging the predicted outcomes obtained from a logistic regression model that is fitted using a weighted score function. The CIPWR estimator has a double robustness property such that estimation consistency can be achieved when either the model for the outcome or the models for both treatment and censoring are correctly specified. We establish the asymptotic properties of the CIPWR estimator for conducting inference, and compare its finite sample performance with that of several alternatives through simulation studies. The methods under comparison are applied to a cohort of prostate cancer patients from an insurance claims database for comparing the adverse effects of four candidate drugs for advanced stage prostate cancer.
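    Below is a much-simplified Python sketch of the weighting idea behind such an estimator, not the CIPWR estimator itself (no augmentation term or standard errors): a propensity model and a censoring model are combined into one inverse-probability weight, the outcome logistic regression is fitted with those weights among uncensored subjects, and its predictions are averaged under each treatment. The simulated data and all variable names are illustrative, and the way the two weights are combined here is an assumption for exposition.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 2))                                             # confounders
z = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1]))))       # treatment
y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * z + x[:, 0] + x[:, 1]))))   # event within the time window
observed = rng.binomial(1, 1 / (1 + np.exp(-(1.0 + 0.5 * x[:, 0])))) == 1  # 1 = not right-censored

# step 1: treatment (propensity) and censoring models
ps = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]
pc = LogisticRegression().fit(x, observed.astype(int)).predict_proba(x)[:, 1]
w = 1.0 / (np.where(z == 1, ps, 1 - ps) * pc)                            # combined IPW weight

# step 2: weighted outcome regression among uncensored subjects
design = np.column_stack([z, x])
outcome = LogisticRegression().fit(design[observed], y[observed], sample_weight=w[observed])

# step 3: average predicted outcomes for everyone under z = 1 and z = 0
mu1 = outcome.predict_proba(np.column_stack([np.ones(n), x]))[:, 1].mean()
mu0 = outcome.predict_proba(np.column_stack([np.zeros(n), x]))[:, 1].mean()
print(f"estimated risk difference: {mu1 - mu0:.3f}")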

     
    An immunotherapy trial often uses a phase I/II design, which monitors the efficacy and toxicity outcomes simultaneously in a single trial, to identify the optimal biological dose. The progression-free survival rate is often used as the efficacy outcome in phase I/II immunotherapy trials. As a result, patients developing disease progression in phase I/II immunotherapy trials are generally seriously ill and are often treated off the trial for ethical considerations. Consequently, the occurrence of disease progression terminates the toxicity event but not vice versa, so the issue of semi-competing risks arises. Moreover, this issue becomes more intractable with late-onset outcomes, which arise when a relatively long follow-up time is required to ascertain progression-free survival. This paper proposes a novel Bayesian adaptive phase I/II design accounting for semi-competing risks outcomes for immunotherapy trials, referred to as the semi-competing risks outcomes for immunotherapy trials (SCI) design. To tackle the issue of semi-competing risks in the presence of late-onset outcomes, we reconstruct the likelihood function based on each patient's actual follow-up time and develop a data augmentation method to efficiently draw posterior samples from a series of Beta-binomial distributions. We propose a concise curve-free dose-finding algorithm to adaptively identify the optimal biological dose using accumulated data without making any parametric dose–response assumptions. Numerical studies show that the proposed SCI design yields good operating characteristics in dose selection, patient allocation, and trial duration.
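    As a stripped-down illustration of Beta-binomial posterior updating and a curve-free selection rule, not the SCI design itself (which additionally handles semi-competing risks and late-onset outcomes through the reconstructed likelihood and data augmentation), the Python sketch below draws posterior samples of per-dose efficacy and toxicity rates from complete, illustrative counts and picks an optimal biological dose among admissible doses.

import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)
# per-dose counts: patients treated, efficacy responses, toxicity events (illustrative)
n_pat = np.array([12, 12, 12, 12])
n_eff = np.array([3, 5, 7, 8])
n_tox = np.array([0, 1, 3, 6])
tox_limit, confidence = 0.30, 0.60          # acceptability cutoffs (illustrative)

draws = 10_000
post_eff = beta.rvs(1 + n_eff, 1 + n_pat - n_eff, size=(draws, 4), random_state=rng)
post_tox = beta.rvs(1 + n_tox, 1 + n_pat - n_tox, size=(draws, 4), random_state=rng)

admissible = (post_tox < tox_limit).mean(axis=0) > confidence   # P(tox rate < limit) high enough
utility = np.where(admissible, post_eff.mean(axis=0), -np.inf)  # rank admissible doses by efficacy
print("admissible doses:", np.flatnonzero(admissible))
print("selected optimal biological dose:", int(np.argmax(utility)))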
  4. Abstract

    With advances in biomedical research, biomarkers are becoming increasingly important prognostic factors for predicting overall survival, while the measurement of biomarkers is often censored due to instruments' lower limits of detection. This leads to two types of censoring: random censoring in overall survival outcomes and fixed censoring in biomarker covariates, posing new challenges in statistical modeling and inference. Existing methods for analyzing such data focus primarily on linear regression ignoring censored responses or semiparametric accelerated failure time models with covariates under detection limits (DL). In this paper, we propose a quantile regression for survival data with covariates subject to DL. Compared with existing methods, the proposed approach provides a more versatile tool for modeling the distribution of survival outcomes by allowing covariate effects to vary across conditional quantiles of the survival time and requiring no parametric distribution assumptions for outcome data. To estimate the quantile process of regression coefficients, we develop a novel multiple imputation approach based on another quantile regression for covariates under DL, avoiding stringent parametric restrictions on censored covariates as often assumed in the literature. Under regularity conditions, we show that the estimation procedure yields uniformly consistent and asymptotically normal estimators. Simulation results demonstrate the satisfactory finite-sample performance of the method. We also apply our method to the motivating data from a study of genetic and inflammatory markers of sepsis.
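    A bare-bones Python sketch of the multiple-imputation mechanics, with simulated data, follows. The covariate below the detection limit is imputed here from an assumed truncated-normal conditional law, whereas the paper estimates a quantile-regression imputation model (and a fuller imputation model would also condition on the outcome); random censoring of the survival outcome, which the paper handles, is omitted for brevity.

import numpy as np
import statsmodels.api as sm
from scipy.stats import truncnorm

rng = np.random.default_rng(2)
n, dl, M = 2000, -0.5, 10                      # sample size, detection limit, imputations
w = rng.normal(size=n)                         # fully observed covariate
x = 0.5 * w + rng.normal(size=n)               # biomarker, unit-variance noise around 0.5*w
log_t = 1.0 + 0.7 * x + 0.3 * w + rng.gumbel(size=n)   # log survival time (uncensored here)
below = x < dl                                 # biomarker not quantified below the limit

taus, fits = [0.25, 0.5, 0.75], []
for _ in range(M):
    x_imp = x.copy()
    # impute the censored biomarker from its conditional law truncated above at the DL
    mu = 0.5 * w[below]
    x_imp[below] = truncnorm.rvs(a=-np.inf, b=dl - mu, loc=mu, scale=1.0, random_state=rng)
    X = sm.add_constant(np.column_stack([x_imp, w]))
    fits.append([sm.QuantReg(log_t, X).fit(q=tau).params for tau in taus])

# combine imputations by averaging the coefficient estimates (point estimates only)
for tau, coef in zip(taus, np.mean(fits, axis=0)):
    print(f"tau={tau}: intercept, biomarker, w effects =", np.round(coef, 2))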

     
    For large observational studies lacking a control group (unlike randomized controlled trials, RCTs), propensity scores (PS) are often the method of choice to account for pre-treatment confounding in baseline characteristics and thereby avoid substantial bias in treatment effect estimation. The vast majority of PS techniques focus on average treatment effect estimation, without any clear consensus on how to account for confounders, especially in a multiple-treatment setting. Furthermore, for time-to-event outcomes, the analytical framework is further complicated in the presence of high censoring rates (sometimes due to non-susceptibility of study units to a disease), imbalance between treatment groups, and the clustered nature of the data (where survival outcomes appear in groups). Motivated by a right-censored kidney transplantation dataset derived from the United Network for Organ Sharing (UNOS), we investigate and compare two recent promising PS procedures, (a) the generalized boosted model (GBM) and (b) the covariate-balancing propensity score (CBPS), in an attempt to decouple the causal effects of treatments (here, study subgroups, such as hepatitis C virus (HCV) positive/negative donors and positive/negative recipients) on time to death of kidney recipients due to kidney failure post-transplantation. For estimation, we employ a two-step procedure that addresses the various complexities observed in the UNOS database within a unified paradigm. First, to adjust for the large number of confounders across the multiple subgroups, we fit multinomial PS models via procedures (a) and (b). In the next stage, the estimated PS is incorporated into the likelihood of a semi-parametric cure rate Cox proportional hazards frailty model via inverse probability of treatment weighting, adjusted for multi-center clustering and excess censoring. Our data analysis reveals a more informative and superior performance of the full model, in terms of treatment effect estimation, over sub-models that relax the various features of the event-time dataset.
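    As a simplified illustration of the two-step structure, the Python sketch below fits a multinomial propensity model for several treatment subgroups, with plain multinomial logistic regression standing in for GBM or CBPS, forms stabilized inverse-probability-of-treatment weights, and feeds them into a weighted Cox regression (via lifelines) in place of the paper's cure rate Cox frailty model; all data are simulated and illustrative.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n, k = 4000, 4                                      # subjects, treatment subgroups
x = rng.normal(size=(n, 3))                         # baseline confounders
logits = x @ rng.normal(size=(3, k))
z = np.array([rng.choice(k, p=np.exp(row) / np.exp(row).sum()) for row in logits])
t_event = rng.exponential(scale=np.exp(0.3 * x[:, 0] - 0.2 * z))   # latent failure time
t_cens = rng.exponential(scale=2.0, size=n)                        # right-censoring time
time, event = np.minimum(t_event, t_cens), (t_event <= t_cens).astype(int)

# step 1: multinomial propensity scores and stabilized IPTW weights
ps = LogisticRegression(max_iter=1000).fit(x, z).predict_proba(x)
sw = (np.bincount(z, minlength=k) / n)[z] / ps[np.arange(n), z]

# step 2 (simplified): a propensity-weighted Cox regression with dummy-coded subgroups
df = pd.DataFrame({"time": time, "event": event, "sw": sw})
for j in range(1, k):
    df[f"z{j}"] = (z == j).astype(int)              # subgroup 0 is the reference
CoxPHFitter().fit(df, duration_col="time", event_col="event",
                  weights_col="sw", robust=True).print_summary(decimals=2)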