Title: Bayesian nonparametric inference for panel count data with an informative observation process
Abstract

In this paper, panel count data analysis for recurrent events is considered. Such analysis is useful for studying tumor or infection recurrences in both clinical trials and observational studies. A bivariate Gaussian Cox process model is proposed to jointly model the observation process and the recurrent event process. Bayesian nonparametric inference is developed for simultaneously estimating the regression parameters, the bivariate frailty effects, and the baseline intensity functions. Inference is carried out through Markov chain Monte Carlo, with the required computational techniques fully developed. Predictive inference is also discussed under the Bayesian setting. The proposed method is shown to be efficient via simulation studies, and a clinical trial dataset on skin cancer patients is analyzed to illustrate the approach.
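As a rough sketch (notation invented here, not taken from the paper), a joint model of this kind is typically written with Cox-type intensities for the observation-time process and the recurrent-event process, linked through a bivariate Gaussian frailty,

    \[
      \lambda_i^{\mathrm{obs}}(t) = \lambda_{01}(t)\,\exp\{x_i^\top \beta_1 + b_{i1}\}, \qquad
      \lambda_i^{\mathrm{rec}}(t) = \lambda_{02}(t)\,\exp\{x_i^\top \beta_2 + b_{i2}\}, \qquad
      (b_{i1}, b_{i2})^\top \sim N_2(0, \Sigma),
    \]

where the off-diagonal element of \Sigma measures how informative the observation process is about the recurrent events, and the baseline intensities \lambda_{01}, \lambda_{02} are left unspecified and estimated nonparametrically.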

 
NSF-PAR ID: 10053468
Author(s) / Creator(s):  ;  ;
Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
Date Published:
Journal Name: Biometrical Journal
Volume: 60
Issue: 3
ISSN: 0323-3847
Page Range / eLocation ID: p. 583-596
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Summary

    We propose several Bayesian models for modelling time-to-event data. We consider a piecewise exponential model, a fully parametric cure rate model and a semiparametric cure rate model. For each model, we derive the likelihood function and examine some of its properties for carrying out Bayesian inference with non-informative priors. We also examine model identifiability issues and give conditions which guarantee identifiability. Also, for each model, we construct a class of informative prior distributions based on historical data, i.e. data from similar previous studies. These priors, called power priors, prove to be quite useful in this context. We examine the properties of the power priors for Bayesian inference and, in particular, we study their effect on the current analysis. Tools for model comparison and model assessment are also proposed. A detailed case-study of a recently completed melanoma clinical trial conducted by the Eastern Cooperative Oncology Group is presented and the methodology proposed is demonstrated in detail.
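    In its basic form, the power prior discounts the likelihood of the historical data D_0 by a scalar weight a_0 in [0, 1] before combining it with an initial prior \pi_0(\theta):

        \[
          \pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0}\, \pi_0(\theta),
        \]

    so that a_0 = 0 discards the historical study and a_0 = 1 weights it as heavily as the current data; the paper studies how intermediate choices of a_0 affect the current analysis.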

     
  2. Summary

    Model selection for marginal regression analysis of longitudinal data is challenging owing to the presence of correlation and the difficulty of specifying the full likelihood, particularly for correlated categorical data. The paper introduces a novel Bayesian information criterion type model selection procedure based on the quadratic inference function, which does not require the full likelihood or quasi-likelihood. With probability approaching 1, the criterion selects the most parsimonious correct model. Although a working correlation matrix is assumed, there is no need to estimate the nuisance parameters in the working correlation matrix; moreover, the model selection procedure is robust against the misspecification of the working correlation matrix. The criterion proposed can also be used to construct a data-driven Neyman smooth test for checking the goodness of fit of a postulated model. This test is especially useful and often yields much higher power in situations where the classical directional test behaves poorly. The finite sample performance of the model selection and model checking procedures is demonstrated through Monte Carlo studies and analysis of a clinical trial data set.
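    Roughly, the quadratic inference function replaces the quasi-likelihood as an objective: the inverse working correlation is expanded over known basis matrices, each subject contributes an extended score g_i(\beta) stacking the corresponding estimating functions, and

        \[
          Q_N(\beta) = N\, \bar{g}_N(\beta)^\top C_N(\beta)^{-1} \bar{g}_N(\beta), \qquad
          \bar{g}_N(\beta) = \frac{1}{N}\sum_{i=1}^{N} g_i(\beta), \qquad
          C_N(\beta) = \frac{1}{N}\sum_{i=1}^{N} g_i(\beta)\, g_i(\beta)^\top,
        \]

    after which a BIC-type criterion penalizes Q_N at the fitted value by the number of regression parameters times log N. This is only a sketch of the standard QIF construction; the exact criterion and the conditions for its selection consistency are given in the paper.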

     
  3.
    Multi-type recurrent events are often encountered in medical applications when two or more different event types could repeatedly occur over an observation period. For example, patients may experience recurrences of multi-type nonmelanoma skin cancers in a clinical trial for skin cancer prevention. The aims in those applications are to characterize features of the marginal processes, evaluate covariate effects, and quantify both the within-subject recurrence dependence and the dependence among different event types. We use copula-frailty models to analyze correlated recurrent events of different types. Parameter estimation and inference are carried out by using a Monte Carlo expectation-maximization (MCEM) algorithm, which can handle a relatively large (i.e. three or more) number of event types. Performances of the proposed methods are evaluated via extensive simulation studies. The developed methods are used to model the recurrences of skin cancer with different types. 
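    The copula-frailty likelihood itself is involved, but the MCEM pattern (a Monte Carlo E-step over the latent frailties followed by an M-step that maximizes the resulting average) can be illustrated on a deliberately simple stand-in. The sketch below, with all names invented here, fits a toy Poisson model with a Gaussian log-frailty; it is not the paper's estimator, only the algorithmic skeleton:

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy data: y_i ~ Poisson(exp(beta + b_i)) with latent frailty b_i ~ N(0, sigma^2).
        n, beta_true, sigma_true = 200, 0.5, 0.8
        y = rng.poisson(np.exp(beta_true + rng.normal(0.0, sigma_true, n)))

        def log_post_b(b, y, beta, sigma2):
            # Unnormalised log posterior of the frailties given the observed counts.
            return y * (beta + b) - np.exp(beta + b) - 0.5 * b ** 2 / sigma2

        def e_step(y, beta, sigma2, n_draws=300, step=0.5):
            # Monte Carlo E-step: random-walk Metropolis draws of each b_i.
            b = np.zeros(len(y))
            draws = np.empty((len(y), n_draws))
            for m in range(n_draws):
                prop = b + step * rng.normal(size=len(y))
                accept = np.log(rng.uniform(size=len(y))) < (
                    log_post_b(prop, y, beta, sigma2) - log_post_b(b, y, beta, sigma2)
                )
                b = np.where(accept, prop, b)
                draws[:, m] = b
            return draws[:, n_draws // 2 :]   # discard the first half as burn-in

        beta, sigma2 = 0.0, 1.0
        for _ in range(30):
            draws = e_step(y, beta, sigma2)
            # M-step: closed-form updates for this toy model.
            beta = np.log(y.sum() / np.exp(draws).mean(axis=1).sum())
            sigma2 = np.mean(draws ** 2)

        print(f"beta ~ {beta:.2f} (true 0.5), sigma^2 ~ {sigma2:.2f} (true {sigma_true**2:.2f})")

    In the paper's setting the E-step would instead draw the copula-linked, type-specific frailties, and the M-step would update the marginal regression and baseline parameters together with the copula dependence parameter.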
  4. Abstract

    Censored survival data are common in clinical trial studies. We propose a unified framework for sensitivity analysis to censoring at random in survival data using multiple imputation and martingale, called SMIM. The proposed framework adopts the δ‐adjusted and control‐based models, indexed by the sensitivity parameter, entailing censoring at random and a wide collection of censoring not at random assumptions. Also, it targets a broad class of treatment effect estimands defined as functionals of treatment‐specific survival functions, taking into account missing data due to censoring. Multiple imputation facilitates the use of simple full‐sample estimation; however, the standard Rubin's combining rule may overestimate the variance for inference in the sensitivity analysis framework. We decompose the multiple imputation estimator into a martingale series based on the sequential construction of the estimator and propose the wild bootstrap inference by resampling the martingale series. The new bootstrap inference has a theoretical guarantee for consistency and is computationally efficient compared to the nonparametric bootstrap counterpart. We evaluate the finite‐sample performance of the proposed SMIM through simulation and an application on an HIV clinical trial.
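    The variance piece of this construction can be isolated from the survival machinery: once the estimator's error is expressed as an average of (approximately) mean-zero martingale increments, the wild bootstrap perturbs those increments with i.i.d. multipliers of mean zero and variance one and reads the variability off the replicates, with no re-imputation across replicates. A minimal generic illustration (names invented here; this is not the SMIM code) is:

        import numpy as np

        rng = np.random.default_rng(1)

        def wild_bootstrap_se(increments, n_rep=2000):
            # increments: mean-zero terms whose average approximates the estimator's error.
            n = len(increments)
            w = rng.choice([-1.0, 1.0], size=(n_rep, n))   # Rademacher multipliers
            reps = (w * increments).mean(axis=1)           # perturbed error terms
            return reps.std(ddof=1)

        # Sanity check on a plain sample mean, where the answer is s / sqrt(n).
        x = rng.normal(loc=2.0, scale=3.0, size=500)
        print(wild_bootstrap_se(x - x.mean()), x.std(ddof=1) / np.sqrt(len(x)))

    Reusing fixed increments across replicates is what makes this kind of scheme cheaper than refitting and re-imputing inside a nonparametric bootstrap.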

     
  5. Abstract

    In causal inference problems, one is often tasked with estimating causal effects which are analytically intractable functionals of the data‐generating mechanism. Relevant settings include estimating intention‐to‐treat effects in longitudinal problems with missing data or computing direct and indirect effects in mediation analysis. One approach to computing these effects is to use the g-formula implemented via Monte Carlo integration; when simulation‐based methods such as the nonparametric bootstrap or Markov chain Monte Carlo are used for inference, Monte Carlo integration must be nested within an already computationally intensive algorithm. We develop a widely‐applicable approach to accelerating this Monte Carlo integration step which greatly reduces the computational burden of existing g-computation algorithms. We refer to our method as accelerated g-computation (AGC). The algorithms we present are similar in spirit to multiple imputation, but require removing within‐imputation variance from the standard error rather than adding it. We illustrate the use of AGC on a mediation analysis problem using a beta regression model and in a longitudinal clinical trial subject to nonignorable missingness using a Bayesian additive regression trees model.
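    For orientation, in the simplest point-treatment case the g-formula and its Monte Carlo approximation read

        \[
          E[Y^{a}] \;=\; \int E[Y \mid A = a, L = l]\, dF_L(l)
          \;\approx\; \frac{1}{M} \sum_{m=1}^{M} \hat{E}[Y \mid A = a, L = l_m^{*}],
        \]

    with the l_m^{*} drawn from the (estimated) covariate distribution. In the longitudinal and mediation settings of the paper this integration is iterated over time points or mediators, and it is this repeated simulation step, nested inside the bootstrap or MCMC, that AGC is designed to accelerate.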

     