

Title: Soft calibration for selection bias problems under mixed-effects models
Abstract

Calibration weighting has been widely used to correct selection bias in nonprobability sampling, missing data and causal inference. The main idea is to calibrate the biased sample to a benchmark by adjusting the subject weights. However, hard calibration can produce enormous weights when exact calibration is enforced on a large set of extraneous covariates. This article proposes a soft calibration scheme in which the outcome and the selection indicator follow mixed-effects models. The scheme imposes exact calibration on the fixed effects and approximate calibration on the random effects. On the one hand, soft calibration has an intrinsic connection with best linear unbiased prediction, which yields more efficient estimation than hard calibration. On the other hand, soft calibration weighting can be viewed as penalized propensity score weight estimation, with the penalty term motivated by the mixed-effects structure. The asymptotic distribution and a valid variance estimator are derived for soft calibration. We demonstrate the superiority of the proposed estimator over competitors in simulation studies and in a real-world data application on the effect of BMI screening on childhood obesity.
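To fix ideas, below is a minimal numerical sketch of the soft-calibration idea (our illustration, not the authors' algorithm): the weights stay close to the design weights, calibration on the fixed-effect benchmark totals T_x is enforced exactly, and deviations from the random-effect totals T_z are penalized by a ridge-type term with tuning parameter lam, mirroring the mixed-effects structure. The function name soft_calibrate and all inputs are hypothetical.

```python
# Illustrative sketch of soft calibration (not the paper's algorithm).
# Exact calibration on fixed-effect totals T_x; penalized (approximate)
# calibration on random-effect totals T_z, controlled by lam.
import numpy as np
from scipy.optimize import minimize

def soft_calibrate(X, Z, d, T_x, T_z, lam=1.0):
    """Weights w minimizing 0.5*||w - d||^2 + ||Z'w - T_z||^2 / (2*lam),
    subject to the exact constraint X'w = T_x; d are the design weights."""
    def objective(w):
        soft = Z.T @ w - T_z                    # approximate calibration residual
        return 0.5 * np.sum((w - d) ** 2) + 0.5 / lam * np.sum(soft ** 2)
    hard = {"type": "eq", "fun": lambda w: X.T @ w - T_x}  # exact calibration
    return minimize(objective, x0=d, constraints=[hard]).x

# Toy data: a biased sample of n units, one fixed-effect covariate plus an
# intercept, and two cluster (random-effect) indicators.
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = rng.integers(0, 2, size=(n, 2)).astype(float)
d = np.ones(n)
T_x = np.array([60.0, 4.0])    # hypothetical fixed-effect benchmark totals
T_z = np.array([28.0, 32.0])   # hypothetical random-effect benchmark totals

w = soft_calibrate(X, Z, d, T_x, T_z, lam=5.0)
print("hard-constraint residual:", X.T @ w - T_x)   # ~0: met exactly
print("soft-constraint residual:", Z.T @ w - T_z)   # small but nonzero
```

As lam → 0 the penalty forces the random-effect calibration to hold exactly and hard calibration is recovered; larger lam relaxes it toward the design weights, which is how soft calibration avoids the enormous weights the abstract warns about.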

 
Award ID(s): 1931380
NSF-PAR ID: 10491690
Author(s) / Creator(s): ; ;
Publisher / Repository: Biometrika
Journal Name: Biometrika
Volume: 110
Issue: 4
ISSN: 0006-3444
Page Range / eLocation ID: 897 to 911
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract Propensity score weighting is a tool for causal inference that adjusts for measured confounders in observational studies. In practice, data often present complex structures, such as clustering, which make propensity score modeling and estimation challenging. In addition, for clustered data there may be unmeasured cluster-level covariates related to both the treatment assignment and the outcome. When such unmeasured cluster-specific confounders exist and are omitted from the propensity score model, the subsequent propensity score adjustment may be biased. In this article, we propose a calibration technique for propensity score estimation under the latent ignorable treatment assignment mechanism, i.e., the treatment-outcome relationship is unconfounded given the observed covariates and the latent cluster-specific confounders. We impose novel balance constraints that imply exact balance of the observed confounders and the unobserved cluster-level confounders between the treatment groups. We show that the proposed calibrated propensity score weighting estimator is doubly robust: it is consistent for the average treatment effect if either the propensity score model is correctly specified or the outcome follows a linear mixed-effects model. Moreover, the proposed weighting method can be combined with sampling weights for an integrated solution that handles both confounding and sampling designs for causal inference with clustered survey data. In simulation studies, we show that the proposed estimator is superior to other competitors. We estimate the effect of School Body Mass Index Screening on the prevalence of overweight and obesity for elementary schools in Pennsylvania.
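    As a sketch of what such balance constraints can look like (our notational guess, not necessarily the paper's exact formulation; w_i are the calibration weights, x_i the observed confounders, A_i the treatment indicator, and k indexes clusters):

    $$\sum_{i:\,A_i=1} w_i\, x_i = \sum_{i:\,A_i=0} w_i\, x_i, \qquad \sum_{i \in \text{cluster } k,\; A_i=1} w_i = \sum_{i \in \text{cluster } k,\; A_i=0} w_i \quad \text{for every } k.$$

    Because a cluster-level confounder is constant within its cluster, exactly balancing the per-cluster weight totals balances that confounder between treatment groups even though it is never observed.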
  2. Abstract

    Complementary features of randomized controlled trials (RCTs) and observational studies (OSs) can be used jointly to estimate the average treatment effect of a target population. We propose a calibration weighting estimator that enforces covariate balance between the RCT and the OS, thereby improving the trial-based estimator's generalizability. Exploiting semiparametric efficiency theory, we propose a doubly robust augmented calibration weighting estimator that achieves the efficiency bound derived under the identification assumptions. A nonparametric sieve method is provided as an alternative to the parametric approach, enabling robust approximation of the nuisance functions and data-adaptive selection of the outcome predictors for calibration. We establish asymptotic results and confirm the finite-sample performance of the proposed estimators in simulation experiments and in an application estimating the treatment effect of adjuvant chemotherapy for early-stage non-small-cell lung cancer patients after surgery.
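    One common way to operationalize such covariate balance is an entropy-balancing-style calibration (a sketch under our assumptions, not necessarily the paper's exact objective): with g(X) the covariate functions to balance and m the OS sample size, solve

    $$\min_{w}\ \sum_{i \in \text{RCT}} w_i \log w_i \quad \text{subject to} \quad \sum_{i \in \text{RCT}} w_i\, g(X_i) = \frac{1}{m} \sum_{j \in \text{OS}} g(X_j), \qquad \sum_{i \in \text{RCT}} w_i = 1.$$

    The calibrated weights reweight the trial sample so that its covariate profile matches the target population represented by the OS, after which the trial-based contrast can be transported.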

     
  3. This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score‐based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio‐of‐mediator‐probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score‐based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2‐step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio‐of‐mediator‐probability weighting analysis a solution to the 2‐step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance‐covariance matrix for the indirect effect and direct effect 2‐step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score‐based weighting.
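    The stacking idea rests on standard M-estimation theory (a generic statement of the machinery, not the paper's specific derivation). Writing s_i(α) for the score function of the step-1 propensity score model and m_i(β, α) for the step-2 estimating function of the direct and indirect effects, both steps are solved jointly,

    $$\sum_{i=1}^{n} \psi_i(\theta) = \sum_{i=1}^{n} \begin{pmatrix} s_i(\alpha) \\ m_i(\beta, \alpha) \end{pmatrix} = 0, \qquad \theta = (\alpha, \beta),$$

    and the asymptotic variance is the sandwich $A^{-1} B A^{-\top}$ with $A = E\left[-\partial \psi_i / \partial \theta^{\top}\right]$ and $B = E\left[\psi_i \psi_i^{\top}\right]$; the off-diagonal block of A is what propagates the uncertainty in the estimated weights into the standard errors of the effect estimates.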

     
  4. ABSTRACT We describe and test the fiducial covariance matrix model for the combined two-point function analysis of the Dark Energy Survey Year 3 (DES-Y3) data set. Using a variety of new ansatzes for covariance modelling and testing, we validate the assumptions and approximations of this model. These include the assumption of Gaussian likelihood, the trispectrum contribution to the covariance, the impact of evaluating the model at a wrong set of parameters, the impact of masking and survey geometry, deviations from Poissonian shot noise, galaxy weighting schemes, and other sub-dominant effects. We find that our covariance model is robust and that its approximations have little impact on goodness of fit and parameter estimation. The largest impact on best-fitting figure-of-merit arises from the so-called fsky approximation for dealing with finite survey area, which on average increases the χ2 between the maximum posterior model and the measurement by 3.7 per cent (Δχ2 ≈ 18.9). Standard methods to go beyond this approximation fail for DES-Y3, but we derive an approximate scheme to deal with these features. For parameter estimation, our ignorance of the exact parameters at which to evaluate our covariance model causes the dominant effect. We find that it increases the scatter of maximum posterior values for Ωm and σ8 by about 3 per cent and for the dark energy equation-of-state parameter by about 5 per cent.
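    For context, the fsky approximation mentioned above typically rescales a full-sky Gaussian covariance by the observed sky fraction (the textbook harmonic-space form, not necessarily the exact DES-Y3 implementation):

    $$\mathrm{Cov}\left(\hat{C}_\ell, \hat{C}_{\ell'}\right) \approx \frac{\delta_{\ell\ell'}}{f_{\mathrm{sky}}}\, \frac{2\left(C_\ell + N_\ell\right)^2}{2\ell + 1},$$

    where N_ℓ is the noise spectrum. This ignores the mode coupling induced by the survey mask, which is why goodness-of-fit statistics such as χ2 can be misestimated for a realistic survey geometry.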
  5. Abstract

    Clark et al. (2019) sought to extend the Loreau–Hector partitioning scheme by showing how to estimate selection and complementarity effects from an incomplete sample of species. We demonstrate that their approach suffers from serious conceptual and mathematical errors. Instead of finding unbiased estimators for a finite population, they inserted ad hoc correction factors into unbiased parameter estimators for an infinite population, without any mathematical justification, in order to force the sample estimators of an infinite population to converge to the true finite population parameter values as sample size n approached population size N. In doing so, they confused the unbiasedness of a sample estimator with its equivalence to the true population parameter value when n = N.

    Additionally, we show that their estimators of complementarity, selection and the net biodiversity effect are incorrect. We then derive the correct unbiased estimators but caution that, contrary to what Clark et al. claim, these quantities will not approximate the corresponding population parameters without significant repeated random sampling, something that would likely be unfeasible in most if not all biodiversity experiments.

    Clark et al. also state that their method can be used to compare distinct experiments characterized by different species and diversity levels, or extrapolate from biodiversity experiments to natural systems. This is incorrect because relative yields are not a property of individual species like monoculture yields but an emergent and specific feature of an experimental community. As such, two experimental communities, even when overlapping significantly in species, are incommensurable for the purpose of predicting relative yields. In other words, different experimental communities are not equivalent to different samples taken from the same statistical population.

    Finally, Clark et al. incorrectly claim that both the original Loreau–Hector partitioning scheme and their extension work for any baseline despite the fact that recent research has shown that a nonlinear relationship between monoculture density and ecosystem functioning will likely inflate the net biodiversity effect in plant systems, and will always lead to spurious measurements of complementarity and selection.
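    For reference, the partition being disputed is the Loreau–Hector (2001) decomposition of the net biodiversity effect (standard form; N is the number of species, ΔRY_i the deviation of species i's observed relative yield from its expected relative yield, and M_i its monoculture yield):

    $$\Delta Y = N\, \overline{\Delta RY}\, \overline{M} + N\, \mathrm{cov}\left(\Delta RY,\, M\right),$$

    where the first term is the complementarity effect and the second is the selection effect; the disagreement above concerns how to estimate these quantities without observing all N species.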

     