

Title: Sensitivity analyses in longitudinal clinical trials via distributional imputation

Missing data are inevitable in longitudinal clinical trials. Conventionally, missingness is handled under the missing at random assumption, which is empirically unverifiable. Sensitivity analyses are therefore critically important for assessing the robustness of study conclusions to untestable assumptions. Toward this end, regulatory agencies and the pharmaceutical industry use sensitivity models such as return-to-baseline, control-based, and washout imputation, following the ICH E9(R1) guidance. Multiple imputation is popular in sensitivity analyses; however, it can be inefficient, and Rubin's combining rule may yield unsatisfactory interval estimates. We propose distributional imputation for sensitivity analysis, which imputes each missing value with samples from its target imputation model given the observed data. Drawing on the idea of Monte Carlo integration, the distributional imputation estimator solves the mean estimating equations over the imputed dataset and is fully efficient, with theoretical guarantees. Moreover, we propose a weighted bootstrap to obtain a consistent variance estimator that accounts for the variability due to both model parameter estimation and target parameter estimation. The advantages of the distributional imputation framework are demonstrated in a simulation study and an antidepressant longitudinal clinical trial.
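As a minimal toy sketch of the ideas above (not the authors' implementation — the dropout mechanism, the return-to-baseline model, and all constants are illustrative assumptions), the following imputes each missing follow-up with many Monte Carlo draws, solves the mean estimating equation over the imputed dataset, and uses an exponential-weight bootstrap for the variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy longitudinal outcome: baseline y0 observed, follow-up y1 partly missing.
n = 500
y0 = rng.normal(0.0, 1.0, n)
y1 = y0 + rng.normal(0.5, 1.0, n)          # true follow-up
miss = rng.random(n) < 0.3                  # dropout indicator
y1_obs = np.where(miss, np.nan, y1)

# Hypothetical return-to-baseline sensitivity model: after dropout, the
# conditional distribution of y1 reverts to the baseline distribution.
mu_rtb, sd_rtb = y0[miss].mean(), 1.0

# Distributional imputation: M draws per missing value (Monte Carlo
# integration), then solve the mean estimating equation for theta.
M = 200
draws = rng.normal(mu_rtb, sd_rtb, (miss.sum(), M))
cond_means = draws.mean(axis=1)             # per-subject imputed conditional mean
full = np.concatenate([y1_obs[~miss], cond_means])
theta_hat = full.mean()

# Weighted bootstrap: perturb each subject's contribution with i.i.d.
# positive weights to estimate the variance of theta_hat.
B = 400
boot = np.empty(B)
for b in range(B):
    w = rng.exponential(1.0, n)
    boot[b] = np.average(full, weights=w)
theta_se = boot.std(ddof=1)
```

The weighted bootstrap re-solves the weighted estimating equation per replicate, so the resulting standard error reflects both the imputation model and the target parameter estimation, in the spirit of the abstract.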

NSF-PAR ID:
10379076
Author(s) / Creator(s):
Publisher / Repository:
SAGE Publications
Date Published:
Journal Name:
Statistical Methods in Medical Research
Volume:
32
Issue:
1
ISSN:
0962-2802
Format(s):
Medium: X Size: p. 181-194
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

Censored survival data are common in clinical trials. We propose a unified framework for sensitivity analysis to censoring at random in survival data using multiple imputation and martingale techniques, called SMIM. The proposed framework adopts the δ-adjusted and control-based models, indexed by a sensitivity parameter, encompassing censoring at random and a wide collection of censoring-not-at-random assumptions. It also targets a broad class of treatment-effect estimands defined as functionals of treatment-specific survival functions, taking into account missing data due to censoring. Multiple imputation facilitates simple full-sample estimation; however, the standard Rubin's combining rule may overestimate the variance for inference in the sensitivity analysis framework. We decompose the multiple imputation estimator into a martingale series based on the sequential construction of the estimator and propose wild bootstrap inference by resampling the martingale series. The new bootstrap inference has a theoretical guarantee of consistency and is computationally efficient compared with its nonparametric bootstrap counterpart. We evaluate the finite-sample performance of the proposed SMIM through simulation and an application to an HIV clinical trial.
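The martingale resampling idea can be illustrated with a toy sketch (illustrative only — the increments here are synthetic stand-ins for the martingale series the paper constructs): the estimator is written as a sum of mean-zero increments, and the wild bootstrap multiplies each increment by an i.i.d. mean-zero, unit-variance weight instead of resampling subjects:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical martingale-series decomposition of an estimator:
# est = sum of n mean-zero increments xi_i.
n = 300
xi = rng.normal(0.0, 1.0, n) / np.sqrt(n)
est = xi.sum()

# Wild bootstrap: resample the series with Rademacher multipliers,
# keeping the increments themselves fixed.
B = 500
boot = np.empty(B)
for b in range(B):
    w = rng.choice([-1.0, 1.0], size=n)     # mean 0, variance 1
    boot[b] = (w * xi).sum()
se = boot.std(ddof=1)                        # estimates sd(est), here ~1
```

Because each replicate only rescales precomputed increments, this is far cheaper than a nonparametric bootstrap that would re-impute and re-estimate on every resample, which is the computational advantage the abstract highlights.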

  2. Summary

Merging multiple datasets collected from studies with identical or similar scientific objectives is often undertaken in practice to increase statistical power. This article concerns the development of an effective statistical method for merging multiple longitudinal datasets with heterogeneous characteristics, such as different follow-up schedules and study-specific missing covariates (e.g., covariates observed in some studies but missing in others). The presence of study-specific missing covariates poses a major methodological challenge for data merging and analysis. We propose a joint estimating function approach to address this challenge, in which a novel nonparametric estimating function, constructed via a spline-based sieve approximation, bridges estimating equations from studies with missing covariates to those with fully observed covariates. Under mild regularity conditions, we show that the proposed estimator is consistent and asymptotically normal. We evaluate the finite-sample performance of the proposed method through simulation studies. Compared with the conventional multiple imputation approach, our method exhibits smaller estimation bias. We provide an illustrative analysis of longitudinal cohorts collected in Mexico City to assess the effect of lead exposure on children's somatic growth.
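The spline-based sieve component can be sketched in miniature (a toy example, not the paper's estimator — the target function and knot placement are assumptions): an unknown smooth function is approximated by a finite spline basis, and the basis coefficients are solved from least-squares estimating equations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unknown smooth function g(x) observed with noise.
n = 400
x = rng.uniform(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, n)

# Truncated-power cubic spline sieve: polynomial terms plus one
# truncated cubic per interior knot.
knots = np.linspace(0.1, 0.9, 8)
basis = np.column_stack(
    [np.ones(n), x, x**2, x**3]
    + [np.clip(x - k, 0.0, None) ** 3 for k in knots]
)
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
fit = basis @ coef
rmse = np.sqrt(np.mean((fit - y) ** 2))      # close to the noise sd of 0.2
```

In the paper's setting, a sieve like this stands in for the unknown relationship involving the study-specific missing covariate, letting estimating equations from incomplete studies be bridged to those from complete ones.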

  3. Abstract

    Motivation

The human microbiome, which growing evidence links to various diseases, has a profound impact on human health. Because changes in microbiome composition over time are associated with disease and clinical outcomes, microbiome analysis should be performed longitudinally. However, due to limited sample sizes and differing numbers of timepoints across subjects, a significant amount of data cannot be utilized, directly affecting the quality of analysis results. Deep generative models have been proposed to address this data-scarcity issue. Specifically, generative adversarial networks (GANs) have been successfully utilized for data augmentation to improve prediction tasks, and recent studies have shown that GAN-based models outperform traditional imputation methods for missing-value imputation in multivariate time-series datasets.

    Results

This work proposes DeepMicroGen, a bidirectional recurrent neural network-based GAN model, trained on the temporal relationships between observations, to impute missing microbiome samples in longitudinal studies. DeepMicroGen outperforms standard baseline imputation methods, achieving the lowest mean absolute error on both simulated and real datasets. Finally, the proposed model improved clinical outcome prediction for allergies by providing imputations for the incomplete longitudinal dataset used to train the classifier.

    Availability and implementation

    DeepMicroGen is publicly available at https://github.com/joungmin-choi/DeepMicroGen.

  4. Abstract

Valid surrogate endpoints S can be used as substitutes for a true outcome of interest T to measure treatment efficacy in a clinical trial. We propose a causal inference approach that validates a surrogate by incorporating longitudinal measurements of the true outcome through mixed models, and we define models and quantities for validation that may vary across the study period using principal surrogacy criteria. We consider a surrogate-dependent treatment-efficacy curve that allows us to validate the surrogate at different time points, and we extend these methods to accommodate a delayed-start treatment design in which all patients eventually receive the treatment. Because not all parameters are identified in the general setting, we apply a Bayesian approach for estimation and inference, placing more informative prior distributions on selected parameters. We examine the sensitivity of these prior assumptions, as well as assumptions of independence among certain counterfactual quantities conditional on pretreatment covariates that improve identifiability, and we assess the frequentist properties (bias of point and variance estimates, credible-interval coverage) of a Bayesian imputation method. Our work is motivated by a clinical trial of a gene therapy in which functional outcomes were measured repeatedly throughout the trial.

  5. Abstract

We consider the situation where there is a known regression model that can be used to predict an outcome, Y, from a set of predictor variables X. A new variable B is expected to enhance the prediction of Y. A dataset of size n containing Y, X and B is available, and the challenge is to build an improved model for Y|X,B that uses both the available individual-level data and some summary information obtained from the known model for Y|X. We propose a synthetic data approach, which consists of creating m additional synthetic data observations, and then analyzing the combined dataset of size n + m to estimate the parameters of the Y|X,B model. This combined dataset of size n + m now has missing values of B for m of the observations, and is analyzed using methods that can handle missing data (e.g., multiple imputation). We present simulation studies and illustrate the method using data from the Prostate Cancer Prevention Trial. Though the synthetic data method is applicable to a general regression context, to provide some justification, we show in two special cases that the asymptotic variances of the parameter estimates in the Y|X,B model are identical to those from an alternative constrained maximum likelihood estimation approach. This correspondence in special cases, together with the method's broad applicability, makes it appealing for use across diverse scenarios. The Canadian Journal of Statistics 47: 580–603; 2019 © 2019 Statistical Society of Canada
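A toy numeric sketch of the synthetic-data recipe (illustrative only — the data-generating model, the "known" Y|X summary, and the use of a single stochastic imputation in place of full multiple imputation are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# True model: Y = 1.5*X + 0.8*B + e, with B = 0.5*X + eb. The "known"
# summary model is the implied marginal Y|X regression: slope 1.9,
# residual sd sqrt(0.8**2 * 1 + 1) = sqrt(1.64).
n, m = 200, 400
x = rng.normal(size=n)
b = 0.5 * x + rng.normal(size=n)
y = 1.5 * x + 0.8 * b + rng.normal(size=n)

# Step 1: create m synthetic rows from the known Y|X model; B is missing.
xs = rng.normal(size=m)
ys = 1.9 * xs + rng.normal(0.0, np.sqrt(1.64), size=m)

# Step 2: impute the missing B (a single stochastic imputation as a
# stand-in for multiple imputation): fit B|X,Y on the complete rows,
# then draw B for the synthetic rows.
Z = np.column_stack([np.ones(n), x, y])
coef, *_ = np.linalg.lstsq(Z, b, rcond=None)
resid_sd = (b - Z @ coef).std(ddof=3)
Zs = np.column_stack([np.ones(m), xs, ys])
bs = Zs @ coef + rng.normal(0.0, resid_sd, size=m)

# Step 3: fit the Y|X,B model on the combined n + m rows.
Xall = np.column_stack([np.ones(n + m), np.r_[x, xs], np.r_[b, bs]])
theta, *_ = np.linalg.lstsq(Xall, np.r_[y, ys], rcond=None)
# theta = (intercept, X coefficient, B coefficient)
```

The synthetic rows carry the summary information about Y|X into the combined fit, so the estimated B coefficient stays anchored near its true value while borrowing precision from the known model.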
