Title: Semi-parametric sensitivity analysis for trials with irregular and informative assessment times
ABSTRACT: Many trials are designed to collect outcomes at or around pre-specified times after randomization. If there is variability in the times when participants are actually assessed, this can pose a challenge to learning the effect of treatment, since not all participants have outcome assessments at the times of interest. Furthermore, observed outcome values may not be representative of all participants’ outcomes at a given time. Methods have been developed that account for some types of such irregular and informative assessment times; however, since these methods rely on untestable assumptions, sensitivity analyses are needed. We develop a sensitivity analysis methodology that is benchmarked at the explainable assessment (EA) assumption, under which assessment and outcomes at each time are related only through data collected prior to that time. Our method uses an exponential tilting assumption, governed by a sensitivity analysis parameter, that posits deviations from the EA assumption. Our inferential strategy is based on a new influence function-based, augmented inverse intensity-weighted estimator. Our approach allows for flexible semiparametric modeling of the observed data, which is separated from specification of the sensitivity parameter. We apply our method to a randomized trial of low-income individuals with uncontrolled asthma, and we illustrate implementation of our estimation procedure in detail.
Award ID(s):
2242776
PAR ID:
10596419
Publisher / Repository:
Biometrics
Date Published:
Journal Name:
Biometrics
Volume:
80
Issue:
4
ISSN:
0006-341X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
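To make the abstract's exponential-tilting assumption concrete, the sketch below works a single assessment time with a binary history H: under the explainable-assessment (EA) benchmark (gamma = 0), unassessed participants are imputed with the mean outcome of assessed participants sharing their history, and gamma != 0 tilts that conditional distribution by exp(gamma * Y). The simulated data and the stratified plug-in estimator are illustrative assumptions, not the paper's semiparametric, influence-function-based estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
h = rng.integers(0, 2, n)                # binary pre-assessment history
y = h + rng.standard_normal(n)           # outcome depends on the history
a = rng.random(n) < np.where(h == 1, 0.8, 0.4)  # assessment depends on H only

def tilted_mean(gamma):
    """Overall mean of Y when E[Y | A=0, H] is the exp(gamma * Y)-tilted
    version of the assessed-participant distribution within each H stratum
    (gamma = 0 recovers the EA benchmark)."""
    imputed = np.empty(n)
    imputed[a] = y[a]                    # assessed outcomes are used as-is
    for lvl in (0, 1):
        obs = (h == lvl) & a
        w = np.exp(gamma * y[obs])
        m = np.sum(w * y[obs]) / np.sum(w)   # tilted conditional mean
        imputed[(h == lvl) & ~a] = m
    return imputed.mean()

mu_ea = tilted_mean(0.0)                 # benchmark: explainable assessment
mu_lo, mu_hi = tilted_mean(-0.3), tilted_mean(0.3)
```

Varying gamma over a plausible range and re-estimating traces out how far the EA-benchmark estimate could move, which is the core output of the sensitivity analysis.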
More Like this
  1. Summary: Covariate adjustment can improve precision in analysing randomized experiments. With fully observed data, regression adjustment and propensity score weighting are asymptotically equivalent in improving efficiency over unadjusted analysis. When some outcomes are missing, we consider combining these two adjustment methods with the inverse probability of observation weighting for handling missing outcomes, and show that the equivalence between the two methods breaks down. Regression adjustment no longer ensures efficiency gain over unadjusted analysis unless the true outcome model is linear in covariates or the outcomes are missing completely at random. Propensity score weighting, in contrast, still guarantees efficiency over unadjusted analysis, and including more covariates in adjustment never harms asymptotic efficiency. Moreover, we establish the value of using partially observed covariates to secure additional efficiency by the missingness indicator method, which imputes all missing covariates by zero and uses the union of the completed covariates and corresponding missingness indicators as the new, fully observed covariates. Based on these findings, we recommend using regression adjustment in combination with the missingness indicator method if the linear outcome model or missing-completely-at-random assumption is plausible and using propensity score weighting with the missingness indicator method otherwise.
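The missingness indicator method and inverse-probability-of-observation weighting from the summary above can be sketched as follows; the data-generating model is an illustrative assumption, and the true observation probability is plugged in where in practice it would be estimated (e.g. by logistic regression on the completed covariates).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.standard_normal(n)                # covariate, partially observed
miss_x = rng.random(n) < 0.3              # covariate missingness indicator
z = rng.integers(0, 2, n)                 # randomized treatment (p = 1/2)
y = 1.0 * z + x + rng.standard_normal(n)  # true average treatment effect = 1

# Missingness indicator method: zero-impute X and append its indicator;
# the union is the new, fully observed covariate set one would adjust on.
x_mim = np.where(miss_x, 0.0, x)
covs = np.column_stack([x_mim, miss_x.astype(float)])

# Outcomes are missing with probability depending on the completed covariate.
p_obs = 1.0 / (1.0 + np.exp(-(0.5 + 0.5 * x_mim)))
r = rng.random(n) < p_obs                 # outcome observation indicator

# Inverse-probability-of-observation weighted difference in means.
w = r / p_obs
ate = (np.sum(w * z * y) / np.sum(w * z)
       - np.sum(w * (1 - z) * y) / np.sum(w * (1 - z)))
```

Because the observation probability depends only on fully observed quantities, the weighted difference in means recovers the true effect even though both a covariate and the outcome are partially missing.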
  2. Summary: Functional principal component analysis has been shown to be invaluable for revealing variation modes of longitudinal outcomes, which serve as important building blocks for forecasting and model building. Decades of research have advanced methods for functional principal component analysis, often assuming independence between the observation times and longitudinal outcomes. Yet such assumptions are fragile in real-world settings where observation times may be driven by outcome-related processes. Rather than ignoring the informative observation time process, we explicitly model the observational times by a general counting process dependent on time-varying prognostic factors. Identification of the mean, covariance function and functional principal components ensues via inverse intensity weighting. We propose using weighted penalized splines for estimation and establish consistency and convergence rates for the weighted estimators. Simulation studies demonstrate that the proposed estimators are substantially more accurate than the existing ones in the presence of a correlation between the observation time process and the longitudinal outcome process. We further examine the finite-sample performance of the proposed method using the Acute Infection and Early Disease Research Program study.
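The inverse intensity weighting idea in the summary above can be illustrated with a toy example in which visit intensity and outcome level share a prognostic factor, so the naive mean over visits is biased toward frequently observed subjects; the intensity is known here for simplicity, whereas the paper estimates it from a counting-process model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
x = rng.integers(0, 2, n)                 # prognostic factor
mu = 1.0 + 1.0 * x                        # subject-level outcome mean
lam = np.where(x == 1, 4.0, 1.0)          # visit intensity depends on x

# Each subject contributes Poisson(lam) visits; x = 1 subjects visit more
# often AND have higher outcomes, so visits over-represent them.
k = rng.poisson(lam)
subj = np.repeat(np.arange(n), k)
y_obs = mu[subj] + rng.standard_normal(subj.size)

naive = y_obs.mean()                      # biased toward high-intensity subjects
w = 1.0 / lam[subj]                       # inverse intensity weights
iiw = np.sum(w * y_obs) / np.sum(w)       # recovers the population mean (1.5)
```

Weighting each visit by the reciprocal of its intensity undoes the over-representation, which is exactly the identification argument the summary invokes for the mean and covariance functions.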
  3. Abstract. Objectives: Epileptiform activity (EA) worsens outcomes in patients with acute brain injuries (e.g., aneurysmal subarachnoid hemorrhage [aSAH]). Randomized controlled trials (RCTs) assessing anti-seizure interventions are needed. Due to scant drug efficacy data and ethical reservations about placebo use, RCTs are lacking or hindered by design constraints. We used a pharmacological model-guided simulator to design RCTs evaluating EA treatment and to determine their feasibility. Methods: In a single-center cohort of adults (age >18) with aSAH and EA, we employed a mechanistic pharmacokinetic-pharmacodynamic framework to model treatment response using observational data. We then simulated RCTs for levetiracetam and propofol, each with three treatment arms mirroring clinical practice and an additional placebo arm. Using our framework, we simulated EA trajectories across treatment arms. We predicted discharge modified Rankin Scale as a function of baseline covariates, EA burden, and drug doses using a double machine learning model learned from observational data. Differences in outcomes across arms were used to estimate the required sample size. Results: Sample sizes to achieve 80% power (5% type I error) ranged from 500 for levetiracetam 7 mg/kg vs. placebo to >4000 for levetiracetam 15 vs. 7 mg/kg. For propofol 1 mg/kg/hr vs. placebo, 1200 participants were needed. Simulations comparing propofol at varying doses did not reach 80% power even at samples >1200. Interpretation: Our simulations using observed drug efficacy show that the required sample sizes are infeasible, even for potentially unethical placebo-controlled trials. We highlight the strength of simulations with observational data to inform null hypotheses and assess the feasibility of future trials of EA treatment.
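For orientation on the sample sizes quoted in the abstract above, the textbook two-sample normal-approximation formula shows how the required n scales with the detectable difference; this is a generic point of reference, not the paper's simulation-based machinery.

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sd, power=0.80, alpha=0.05):
    """Per-arm sample size for a two-sided, two-sample z-test of a mean
    difference `delta` with common standard deviation `sd`:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / delta^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # z_{1 - alpha/2}
    z_b = NormalDist().inv_cdf(power)           # z_{power}
    return math.ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2)
```

Because n grows with the inverse square of the effect size, halving the detectable difference roughly quadruples the required sample, which is why the dose-vs-dose comparisons in the abstract demand far larger samples than the drug-vs-placebo comparisons.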
  4. ABSTRACT: Disparities in health or well‐being experienced by minority groups can be difficult to study using the traditional exposure‐outcome paradigm in causal inference, since potential outcomes in variables such as race or sexual minority status are challenging to interpret. Causal decomposition analysis addresses this gap by positing causal effects on disparities under interventions to other, intervenable exposures that may play a mediating role in the disparity. While invoking weaker assumptions than causal mediation approaches, decomposition analyses are often conducted in observational settings and require untestable assumptions that rule out unmeasured confounding. Leveraging the marginal sensitivity model, we develop a sensitivity analysis for weighted causal decomposition estimators and use the percentile bootstrap to construct valid confidence intervals for causal effects on disparities. We also propose a two‐parameter reformulation that enhances interpretability and facilitates an intuitive understanding of the plausibility of unmeasured confounders and their effects. We illustrate our framework on a study examining the effect of parental support on disparities in suicidal ideation among sexual minority youth. We find that the effect is small and sensitive to unmeasured confounding, suggesting that further screening studies are needed to identify mitigating interventions in this vulnerable population.
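A simplified sketch of the marginal sensitivity model step described in the abstract above: if each estimated weight may be off by at most a factor Lambda in either direction, the weighted mean ranges over an interval whose endpoints put the inflated weights on the largest (or smallest) outcomes. The threshold scan is a generic fractional-programming argument; the paper's percentile-bootstrap confidence intervals and two-parameter reformulation are omitted.

```python
import numpy as np

def msm_bounds(y, w, lam):
    """Range of the weighted mean sum(w*y)/sum(w) when each weight may be
    perturbed within [w/lam, w*lam]. The extremum is attained by assigning
    the upper weight to outcomes above (below) some threshold, so scanning
    all thresholds in sorted order finds the exact bounds."""
    order = np.argsort(y)
    y_s, w_s = y[order], w[order]
    lo_w, hi_w = w_s / lam, w_s * lam
    m = len(y)
    best_max, best_min = -np.inf, np.inf
    for k in range(m + 1):
        # upper weights on the k largest outcomes -> candidate maximum
        wk = np.concatenate([lo_w[: m - k], hi_w[m - k:]])
        best_max = max(best_max, np.sum(wk * y_s) / np.sum(wk))
        # upper weights on the k smallest outcomes -> candidate minimum
        wk = np.concatenate([hi_w[:k], lo_w[k:]])
        best_min = min(best_min, np.sum(wk * y_s) / np.sum(wk))
    return best_min, best_max

rng = np.random.default_rng(3)
y = rng.standard_normal(200)
w = rng.uniform(0.5, 2.0, 200)
point = np.sum(w * y) / np.sum(w)
lo1, hi1 = msm_bounds(y, w, 1.0)   # Lambda = 1: no unmeasured confounding
lo2, hi2 = msm_bounds(y, w, 2.0)   # Lambda = 2: interval widens
```

At Lambda = 1 the interval collapses to the point estimate; reporting the Lambda at which the interval first covers the null is a common way to summarize robustness to unmeasured confounding.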
  5. The goal of causal mediation analysis, often described within the potential outcomes framework, is to decompose the effect of an exposure on an outcome of interest along different causal pathways. Using the assumption of sequential ignorability to attain non-parametric identification, Imai et al. (2010) proposed a flexible approach to measuring mediation effects, focusing on parametric and semiparametric normal/Bernoulli models for the outcome and mediator. Less attention has been paid to the case where the outcome and/or mediator models are mixed-scale, ordinal, or otherwise fall outside the normal/Bernoulli setting. We develop a simple but flexible parametric modeling framework to accommodate the common situation where the responses are mixed continuous and binary, and apply it to a zero-one inflated beta model for the outcome and mediator. Applying our proposed methods to the publicly available JOBS II dataset, we (i) argue for the need for non-normal models, (ii) show how to estimate both average and quantile mediation effects for boundary-censored data, and (iii) show how to conduct a meaningful sensitivity analysis by introducing unidentified, scientifically meaningful sensitivity parameters.
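A zero-one inflated beta outcome and a Monte Carlo average causal mediation effect, as described in the abstract above, can be sketched as follows. All model parameters here are illustrative assumptions, not values fitted to JOBS II, and the sketch simulates from known models rather than estimating them.

```python
import numpy as np

rng = np.random.default_rng(4)

def expit(t):
    return 1.0 / (1.0 + np.exp(-t))

def draw_mediator(z, n):
    # hypothetical binary mediator model: probability increases with treatment
    return (rng.random(n) < expit(-0.5 + 1.0 * z)).astype(float)

def draw_outcome(z, m):
    """Zero-one inflated beta outcome: point masses at 0 and 1 plus a
    Beta-distributed interior, with all three components shifting with
    the treatment z and mediator m (illustrative parameters)."""
    n = m.size
    p_zero = expit(-1.0 - 0.5 * z - 0.5 * m)   # boundary mass at 0
    p_one = expit(-2.0 + 0.5 * z + 0.5 * m)    # boundary mass at 1
    mu = expit(-0.2 + 0.3 * z + 0.6 * m)       # interior Beta mean
    phi = 5.0                                  # interior Beta precision
    y = rng.beta(mu * phi, (1.0 - mu) * phi)
    u = rng.random(n)
    return np.where(u < p_zero, 0.0, np.where(u < p_zero + p_one, 1.0, y))

# Average causal mediation effect under treatment, by Monte Carlo:
# hold treatment at 1 and switch only the mediator's assignment.
n = 400_000
m1, m0 = draw_mediator(1, n), draw_mediator(0, n)
acme = draw_outcome(1, m1).mean() - draw_outcome(1, m0).mean()
```

Because the boundary masses, not just the interior mean, respond to the mediator, a normal-model mediation analysis would misstate both the average effect and any quantile effects, which is the abstract's motivation for the non-normal framework.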