Summary In some randomized clinical trials, patients may die before their outcomes can be measured. Even though randomization generates comparable treatment and control groups at baseline, the surviving patients in the two arms often differ systematically in background variables that are prognostic for the outcomes. This is called the truncation-by-death problem. Under the potential outcomes framework, the only well-defined causal effect on the outcome is within the subgroup of patients who would survive under both treatment and control. Because the definition of this subgroup depends on potential values of the survival status that cannot be observed jointly, we cannot identify the causal effect of interest without strong parametric assumptions and consequently can obtain only bounds on it. Unfortunately, many such bounds are too wide to be useful. We propose to use detailed survival information before and after the measurement time point of the outcomes to sharpen the bounds on this subgroup causal effect. Because survival times contain useful information about the final outcome, carefully utilizing them can improve statistical inference without imposing strong parametric assumptions. Moreover, we propose a copula model to relax the commonly invoked but often questionable monotonicity assumption that treatment extends survival for every patient.
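In the standard notation for this problem (assumed here; the abstract does not spell it out), let $S(z) \in \{0,1\}$ denote potential survival and $Y(z)$ the potential outcome under treatment arm $z$. The subgroup effect described above is the survivor average causal effect,

$$\mathrm{SACE} = E\{Y(1) - Y(0) \mid S(1) = S(0) = 1\},$$

which conditions on the always-survivor stratum. Since $S(1)$ and $S(0)$ are never observed for the same patient, stratum membership, and hence the SACE, is not point identified without further assumptions.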
This content will become publicly available on July 9, 2026
Identifying and bounding the probability of necessity for causes of effects with ordinal outcomes
Abstract Although the existing causal inference literature focuses on the forward-looking perspective of estimating the effects of causes, the backward-looking perspective can provide insights into the causes of effects. In backward-looking causal inference, the probability of necessity measures the probability that a certain event was caused by the treatment, given the observed treatment and outcome. Most existing results focus on binary outcomes. Motivated by applications with ordinal outcomes, we propose a general definition of the probability of necessity. However, identifying the probability of necessity is challenging because it involves the joint distribution of the potential outcomes. We propose a novel assumption of monotonic incremental treatment effect to identify the probability of necessity with ordinal outcomes. We also discuss the testable implications of this key identification assumption. When it fails, we derive explicit formulas for the sharp large-sample bounds on the probability of necessity.
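As background, for a binary treatment $Z$ and a binary outcome $Y$, the classical probability of necessity (Pearl's formulation; the ordinal generalization proposed in the paper is not reproduced here) is

$$\mathrm{PN} = \mathrm{pr}\{Y(0) = 0 \mid Z = 1, Y = 1\},$$

the probability that the outcome would not have occurred without treatment, among treated units that experienced the outcome. Because it involves the joint behaviour of $Y(0)$ and $Y(1)$, it is in general only partially identified, which is why monotonicity-type assumptions or sharp bounds are needed.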
- Award ID(s): 1945136
- PAR ID: 10625612
- Publisher / Repository: Oxford University Press
- Date Published:
- Journal Name: Biometrika
- ISSN: 0006-3444
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Abstract Structural nested mean models (SNMMs) are useful for causal inference about treatment effects in longitudinal observational studies. Most existing works assume that the data are collected at prespecified time points for all subjects, which, however, may be restrictive in practice. To deal with irregularly spaced observations, we assume a class of continuous-time SNMMs and a martingale condition of no unmeasured confounding (NUC) to identify the causal parameters. We develop the semiparametric efficiency theory and locally efficient estimators for continuous-time SNMMs. This task is nontrivial because of the restrictions that the NUC assumption imposes on the SNMM parameters. In the presence of ignorable censoring, we show that the complete-case estimator is optimal among a class of weighting estimators, including the inverse probability of censoring weighting estimator, and that it is doubly robust: it is consistent if at least one of the models for the potential outcome mean function and the treatment process is correctly specified. The new framework allows us to conduct causal analyses that respect the underlying continuous-time nature of the data processes. A simulation study shows that the proposed estimator outperforms existing approaches. We estimate the effect of time to initiation of highly active antiretroviral therapy on the CD4 count at year 2 using the observational Acute Infection and Early Disease Research Program database. (A notational sketch of the discrete-time SNMM blip function follows this list.)
- A powerful tool for the analysis of nonrandomized observational studies has been the potential outcomes model. This framework allows analysts to estimate average treatment effects. This article considers the situation in which high-dimensional covariates are present and revisits the standard assumptions made in causal inference. We show that, within a flexible Gaussian process framework, the assumption of strict overlap leads to very restrictive assumptions about the distribution of the covariates, results that can be characterized using classical results on Gaussian random measures as well as reproducing kernel Hilbert space theory. In addition, we propose a strategy for data-adaptive causal effect estimation that does not rely on the strict overlap assumption. These findings reveal, within a focused framework, the stringency that accompanies the use of the treatment positivity assumption in high-dimensional settings. (The strict overlap condition is written out after this list.)
- Abstract For ordinal outcomes, the average treatment effect is often ill-defined and hard to interpret. Echoing Agresti and Kateri, we argue that the relative treatment effect can be a useful measure, especially for ordinal outcomes, defined as $\Delta = \mathrm{pr}\{Y_i(1) > Y_i(0)\} - \mathrm{pr}\{Y_i(1) < Y_i(0)\}$, with $Y_i(1)$ and $Y_i(0)$ being the potential outcomes of unit $i$ under treatment and control, respectively. Given the marginal distributions of the potential outcomes, we derive the sharp bounds on $\Delta$, which are identifiable parameters based on the observed data. Agresti and Kateri focused on modeling strategies under the assumption of independent potential outcomes, but we allow for arbitrary dependence. (A linear-programming sketch for computing such bounds follows this list.)
- Summary In many observational studies, the treatment assignment mechanism is not individualistic, as it allows the probability of treatment of a unit to depend on quantities beyond the unit's covariates. In such settings, unit treatments may be entangled in complex ways. In this article, we consider a particular instance of this problem in which the treatments are entangled by a social network among the units. For instance, when studying the effects of peer interaction on a social media platform, the treatment of a unit depends on the change of the interaction network over time. A similar situation is encountered in many economic studies, such as those examining the effects of bilateral trade partnerships on countries' economic growth. The challenge in these settings is that individual treatments depend on a global network that may change in a way that is endogenous and cannot be manipulated experimentally. We show that classical propensity score methods that ignore entanglement may lead to large bias and wrong inference about causal effects. We then propose a solution that involves calculating propensity scores by marginalizing over the network change. Under an appropriate ignorability assumption, this leads to unbiased estimates of the treatment effect of interest. We also develop a randomization-based inference procedure that takes entanglement into account. Under general conditions on the network change, this procedure can deliver valid inference without explicitly modelling the network. We establish theoretical results for the proposed methods and illustrate their behaviour via simulation studies based on real-world network data. We also revisit a large-scale observational dataset on contagion of online user behaviour, showing that ignoring entanglement may inflate estimates of peer influence. (One reading of the marginalized propensity score is sketched after this list.)
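As background for the SNMM abstract above, and in notation assumed here rather than taken from that paper, the standard discrete-time structural nested mean model parameterizes the blip function

$$\gamma_m(\bar{a}_m, \bar{l}_m; \psi) = E\{Y(\bar{a}_m, 0) - Y(\bar{a}_{m-1}, 0) \mid \bar{L}_m = \bar{l}_m, \bar{A}_m = \bar{a}_m\},$$

the mean effect of a final blip of treatment $a_m$ at time $m$, followed by no further treatment, given the covariate and treatment history. The continuous-time version studied in the paper generalizes this by letting the treatment and covariate processes be observed at irregular, subject-specific times.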
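For the high-dimensional overlap abstract above: the strict overlap (positivity) assumption in question requires the propensity score $e(x) = \mathrm{pr}(Z = 1 \mid X = x)$ to be uniformly bounded away from 0 and 1,

$$\varepsilon \le e(x) \le 1 - \varepsilon \quad \text{for some fixed } \varepsilon > 0 \text{ and all } x,$$

a condition that, as the paper argues, becomes increasingly restrictive on the covariate distribution as the dimension of $X$ grows.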
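For the relative treatment effect abstract above, here is a minimal sketch of how sharp bounds on $\Delta$ can be computed from the two marginal distributions by linear programming over all couplings with those marginals; the function name and the example marginals are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def delta_bounds(p, q):
    """Sharp bounds on Delta = pr{Y(1) > Y(0)} - pr{Y(1) < Y(0)}
    over all couplings pi[j, k] = pr{Y(1) = j, Y(0) = k} whose
    marginals are p (for Y(1)) and q (for Y(0))."""
    J = len(p)
    # Objective: each cell pi[j, k] contributes sign(j - k) to Delta.
    c = np.sign(np.subtract.outer(np.arange(J), np.arange(J))).ravel()
    # Equality constraints: row sums of pi equal p, column sums equal q.
    A_eq = np.vstack([np.kron(np.eye(J), np.ones(J)),   # row sums
                      np.kron(np.ones(J), np.eye(J))])  # column sums
    b_eq = np.concatenate([p, q])
    lower = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
    upper = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
    return lower, upper

# Illustrative marginals for a three-level ordinal outcome.
p = np.array([0.2, 0.3, 0.5])   # distribution of Y(1)
q = np.array([0.4, 0.4, 0.2])   # distribution of Y(0)
print(delta_bounds(p, q))
```

Because $\Delta$ is linear in the coupling and the set of couplings with fixed marginals is a polytope, the two linear programs attain the sharp lower and upper bounds.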
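For the entanglement abstract above: one way to read "marginalizing over the network change" (the notation is assumed here, not taken from the paper) is to average the unit-level treatment probability over the distribution of the network change $\Delta G$,

$$e_i = E\{\mathrm{pr}(Z_i = 1 \mid \Delta G, X_i) \mid X_i\},$$

so that the resulting score depends only on unit-level information and can play the role of a classical propensity score under the paper's ignorability assumption.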