

Search for: All records

Award ID contains: 1659935


  1. In non-experimental research, a sensitivity analysis helps determine whether a causal conclusion could be easily reversed in the presence of hidden bias. A new weighting-based approach to sensitivity analysis extends and supplements propensity score weighting methods for identifying the average treatment effect for the treated (ATT). In essence, the discrepancy between a new weight that adjusts for the omitted confounders and an initial weight that omits them captures the role of the confounders. This strategy is appealing for several reasons: regardless of how complex the data-generating functions are, the number of sensitivity parameters remains small and their forms never change. A graphical display of the sensitivity parameter values facilitates a holistic assessment of the dominant potential bias. An application to the well-known LaLonde data lays out the implementation procedure and illustrates its broad utility. The data offer a prototypical example of non-experimental evaluations of the average impact of job training programmes for the participant population.

     
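The core idea above can be sketched on simulated data: compare the ATT estimate under an initial propensity-score weight that omits a hidden confounder with the estimate under a weight that adjusts for it. All variable names and data-generating values below are hypothetical illustrations, not the paper's analysis; for simplicity, the two propensity scores are plugged in rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)                            # observed covariate
u = rng.normal(size=n)                            # hidden confounder
p_true = 1 / (1 + np.exp(-(0.5 * x + 0.8 * u)))   # true treatment probability
z = rng.binomial(1, p_true)                       # treatment indicator
y = 1.0 * z + x + u + rng.normal(size=n)          # outcome; true ATT = 1.0

def att_weighted(y, z, w):
    """ATT: treated mean minus odds-weighted control mean."""
    return y[z == 1].mean() - np.average(y[z == 0], weights=w[z == 0])

# Initial weight: a propensity model that omits u (the score uses x only);
# new weight: the score that also adjusts for u.
p_init = 1 / (1 + np.exp(-0.5 * x))
w_init = p_init / (1 - p_init)                    # ATT odds weights for controls
w_new = p_true / (1 - p_true)

att_init = att_weighted(y, z, w_init)
att_new = att_weighted(y, z, w_new)
print(round(att_init, 2), round(att_new, 2))      # the gap reflects hidden bias
```

The discrepancy between the two weighted estimates is exactly the quantity the sensitivity analysis parameterizes: here the adjusted estimate recovers the simulated ATT while the initial one carries the bias from the omitted confounder.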
  2. Barnow, Burt S. (Ed.)
  3. In 2003, Chicago Public Schools introduced double-dose algebra, requiring two periods of math—one period of algebra and one of algebra support—for incoming ninth graders with eighth-grade math scores below the national median. Using a regression discontinuity design, earlier studies showed promising results from the program: for median-skill students, double-dose algebra improved algebra test scores, pass rates, high school graduation rates, and college enrollment. This study follows the same students 12 years later. Our findings show that, for median-skill students in the 2003 cohort, double-dose algebra significantly increased semesters of college attended and college degree attainment. These results were not replicated for the 2004 cohort. Importantly, the impact of the policy on median-skill students depended largely on how classes were organized. In 2003, the impacts on college persistence and degree attainment were large in schools that strongly adhered to the cut-score-based course assignment but did not group median-skill students with lower-skill peers. Few schools implemented the policy in such a way in 2004.
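A regression discontinuity estimate of the kind described can be sketched on toy data as follows; the cutoff, bandwidth, and effect size are invented for illustration, and a local linear fit on each side of the cutoff stands in for the study's actual specification.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
score = rng.uniform(-1, 1, size=n)        # 8th-grade score, centered at cutoff
treated = (score < 0).astype(float)       # below-cutoff students get double dose
outcome = 0.3 * treated + 0.5 * score + rng.normal(scale=0.5, size=n)

h = 0.25                                  # bandwidth around the cutoff
left = (score >= -h) & (score < 0)        # treated side
right = (score >= 0) & (score <= h)       # comparison side

def intercept_at_cutoff(s, y):
    """Fit y = a + b*s by least squares; return the value predicted at s = 0."""
    slope, intercept = np.polyfit(s, y, 1)
    return intercept

rd_effect = (intercept_at_cutoff(score[left], outcome[left])
             - intercept_at_cutoff(score[right], outcome[right]))
print(round(rd_effect, 2))                # recovers the simulated jump
```

The jump between the two fitted intercepts at the cutoff is the local treatment effect for students right at the median score, which is why the study's conclusions apply to median-skill students specifically.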
  4. Early linguistic input is a powerful predictor of children’s language outcomes. We investigated two novel questions about this relationship: Does the impact of language input vary over time, and does the impact of time-varying language input on child outcomes differ for vocabulary and for syntax? Using methods from epidemiology to account for baseline and time-varying confounding, we predicted 64 children’s outcomes on standardized tests of vocabulary and syntax in kindergarten from their parents’ vocabulary and syntax input when the children were 14 and 30 months old. For vocabulary, children whose parents provided diverse input earlier as well as later in development were predicted to have the highest outcomes. For syntax, children whose parents’ input substantially increased in syntactic complexity over time were predicted to have the highest outcomes. The optimal sequence of parents’ linguistic input for supporting children’s language acquisition thus varies for vocabulary and for syntax.
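The epidemiologic adjustment the abstract refers to can be illustrated, in a simplified two-wave form, with stabilized inverse-probability weights. This is a generic sketch of that class of methods on fabricated data, not the study's actual estimator or variables.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
base = rng.normal(size=n)                 # baseline confounder
# "diverse input" indicators at waves 1 and 2 (e.g., 14 and 30 months),
# influenced by the baseline confounder and by earlier input
p1 = 1 / (1 + np.exp(-base))
a1 = rng.binomial(1, p1)
p2 = 1 / (1 + np.exp(-(0.5 * base + 0.7 * a1)))
a2 = rng.binomial(1, p2)

def prob_of(a, p):
    """Probability of the observed value a under success probability p."""
    return np.where(a == 1, p, 1 - p)

# Stabilized weight: (marginal probability of the observed input history)
# over (its probability given the confounder), multiplied across waves.
p2_marg = np.where(a1 == 1, a2[a1 == 1].mean(), a2[a1 == 0].mean())
sw = (prob_of(a1, a1.mean()) / prob_of(a1, p1)) \
   * (prob_of(a2, p2_marg) / prob_of(a2, p2))
print(round(sw.mean(), 2))                # stabilized weights average near 1
```

Weighting each child's kindergarten outcome by `sw` would, under the usual assumptions, let a single model compare all four input sequences (early/late, high/low) as though input had been assigned independently of the confounder.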
  5. Education research has experienced a methodological renaissance over the past two decades, with a new focus on large-scale randomized experiments. This wave of experiments has made education research an even more exciting area for statisticians, unearthing many lessons and challenges in experimental design, causal inference, and statistics more broadly. Importantly, educational research and practice almost always occur in a multilevel setting, which makes the statistics relevant to other fields with this structure, including social policy, health services research, and clinical trials in medicine. In this article we first briefly review the history that led to this new era in education research and describe the design features that dominate the modern large-scale educational experiments. We then highlight some of the key statistical challenges in this area, including endogeneity of design, heterogeneity of treatment effects, noncompliance with treatment assignment, mediation, generalizability, and spillover. Though a secondary focus, we also touch on promising trial designs that answer more nuanced questions, such as the SMART design for studying dynamic treatment regimes and factorial designs for optimizing the components of an existing treatment.
  6. A common goal in observational research is to estimate marginal causal effects in the presence of confounding variables. One solution to this problem is to use the covariate distribution to weight the outcomes so that the data appear randomized. The propensity score is a natural quantity that arises in this setting. Propensity score weights have desirable asymptotic properties, but they often fail to adequately balance covariate data in finite samples. Empirical covariate-balancing methods offer an appealing alternative by exactly balancing the sample moments of the covariate distribution. With this objective in mind, we propose a framework for estimating balancing weights by solving a constrained convex program in which the criterion function to be optimized is a Bregman distance. We then show that different distances in this class yield weights identical to those of other covariate-balancing methods. A series of numerical studies demonstrates these similarities.
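For concreteness, here is a small sketch of one member of this family: entropy balancing, in which minimizing the Kullback-Leibler divergence (a Bregman distance) subject to exact moment constraints reduces to Newton's method on a convex dual. The data and dimensions are invented, and this is an illustration of the technique, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4_000
x = rng.normal(size=(n, 2))                         # two covariates
ps = 1 / (1 + np.exp(-(x @ np.array([0.8, -0.5]))))
z = rng.binomial(1, ps)                             # treatment indicator
xc, xt = x[z == 0], x[z == 1]
target = xt.mean(axis=0)                            # treated covariate means

lam = np.zeros(2)
for _ in range(50):                                 # Newton on the convex dual
    w = np.exp(xc @ lam)
    w /= w.sum()                                    # softmax weights on controls
    grad = xc.T @ w - target                        # weighted mean minus target
    m = xc.T @ w
    hess = (xc * w[:, None]).T @ xc - np.outer(m, m)  # weighted covariance
    lam -= np.linalg.solve(hess, grad)

w = np.exp(xc @ lam)
w /= w.sum()
balance_gap = np.abs(xc.T @ w - target).max()
print(balance_gap)                                  # exact balance, up to tolerance
```

Unlike raw propensity-score weights, which balance covariates only in expectation, these weights match the treated sample moments exactly in the sample at hand, which is the finite-sample advantage the abstract highlights.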
  7. This study provides a template for multisite causal mediation analysis using a comprehensive weighting-based analytic procedure that enhances external and internal validity. The template incorporates a sample weight to adjust for complex sample and survey designs, adopts an inverse-probability-of-treatment weight (IPTW) to adjust for differential treatment assignment probabilities, employs an estimated nonresponse weight to account for non-random nonresponse, and utilizes a propensity score-based weighting strategy to flexibly decompose not only the population average but also the between-site heterogeneity of the total program impact. Because the identification assumptions are not always warranted, a weighting-based balance-checking procedure assesses the remaining overt bias, while a weighting-based sensitivity analysis further evaluates the potential bias related to omitted confounding or to propensity score model misspecification. We derive the asymptotic variance of the estimators of the causal effects, accounting for the sampling uncertainty in the estimated weights. The method is applied to a re-analysis of data from the National Job Corps Study.
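The way the template's several weights compose can be sketched as a simple product. The names and the uniform ranges below are hypothetical placeholders for the estimated quantities the abstract describes, not values from the National Job Corps Study.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000
sample_w = rng.uniform(0.5, 2.0, size=n)        # survey design weight
p_treat = rng.uniform(0.2, 0.8, size=n)         # estimated propensity score
z = rng.binomial(1, p_treat)
iptw = np.where(z == 1, 1 / p_treat, 1 / (1 - p_treat))
p_respond = rng.uniform(0.6, 0.95, size=n)      # estimated response propensity
nonresponse_w = 1 / p_respond

final_w = sample_w * iptw * nonresponse_w       # composite analysis weight
final_w *= n / final_w.sum()                    # normalize to mean 1
print(round(final_w.mean(), 6))
```

Each factor corrects a different selection mechanism (sampling design, treatment assignment, nonresponse), and because the weights multiply, the abstract's point about propagating the sampling uncertainty of every estimated factor into the final variance applies to the product as a whole.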
  8. This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this two-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the two-step estimation problem that stacks the score functions from both steps. We derive the asymptotic variance-covariance matrix for the two-step indirect- and direct-effect estimators, provide simulation results, and illustrate the method with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. Standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analyses involving propensity score-based weighting.

     
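As a toy illustration of why step-1 uncertainty matters, the sketch below bootstraps a simple inverse-probability-weighted mean while re-estimating the propensity model inside every resample, the resampling analogue of the stacking correction. The data and model are fabricated and far simpler than the ratio-of-mediator-probability setting.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2_000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-x))
z = rng.binomial(1, p)                     # step-1 outcome: treatment
y = x + rng.normal(size=n)                 # population mean of y is 0

def fit_ps(x, z, steps=25):
    """Step 1: logistic regression of z on (1, x) via Newton's method."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(steps):
        q = 1 / (1 + np.exp(-(X @ b)))
        grad = X.T @ (z - q)
        hess = (X * (q * (1 - q))[:, None]).T @ X
        b += np.linalg.solve(hess, grad)
    return 1 / (1 + np.exp(-(X @ b)))

def ipw_mean(x, z, y):
    """Step 2: Hajek IPW estimate of E[y] from treated units only."""
    w = z / fit_ps(x, z)
    return (w * y).sum() / w.sum()

est = ipw_mean(x, z, y)
boot = []
for _ in range(200):                       # re-fit the weights every resample
    idx = rng.integers(0, n, size=n)
    boot.append(ipw_mean(x[idx], z[idx], y[idx]))
se = float(np.std(boot))
print(round(est, 2), round(se, 3))
```

Because `ipw_mean` calls `fit_ps` inside each resample, the bootstrap standard error absorbs the step-1 sampling variability; the stacked-score sandwich variance the study derives delivers the same accounting analytically, without 200 refits.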