Title: Variance-based sensitivity analysis for weighting estimators results in more informative bounds
Abstract: Weighting methods are popular tools for estimating causal effects, and assessing their robustness under unobserved confounding is important in practice. Current approaches to sensitivity analysis rely on bounding a worst-case error from omitting a confounder. In this paper, we introduce a new sensitivity model, called the variance-based sensitivity model, which instead bounds the distributional differences that arise in the weights from omitting a confounder. The variance-based sensitivity model can be parameterized by an R² parameter that is both standardized and bounded. We demonstrate, both empirically and theoretically, that the variance-based sensitivity model improves the stability of the sensitivity analysis procedure over existing methods. We show that by moving away from worst-case bounds, we are able to obtain more interpretable and informative bounds. We illustrate our proposed approach on a study examining blood mercury levels using the National Health and Nutrition Examination Survey.
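As a rough numerical companion to the abstract, the sketch below shows how an R²-style sensitivity parameter can translate into a bias bound for a weighted estimator. The bound shape used here, sqrt(R²/(1-R²) · Var(w)) · sd(Y), along with the function name and the synthetic weights, is an illustrative assumption and not the paper's exact expression.

```python
import numpy as np

def variance_based_bias_bound(weights, outcomes, r2):
    """Illustrative bias bound under an assumed variance-based
    sensitivity model. r2 in [0, 1) plays the role of the paper's
    standardized sensitivity parameter: the share of the ideal
    weights' variance attributable to the omitted confounder. The
    bound shape below is a simplified stand-in, not the formal result.
    """
    var_w = np.var(weights)   # variance of the estimated weights
    sd_y = np.std(outcomes)   # spread of the outcome
    # Worst-case extra weight variance implied by the R^2 parameter.
    extra_var = var_w * r2 / (1.0 - r2)
    return np.sqrt(extra_var) * sd_y  # Cauchy-Schwarz-style cap on bias

rng = np.random.default_rng(0)
w = rng.lognormal(sigma=0.5, size=500)  # hypothetical IPW weights
y = rng.normal(size=500)                # hypothetical outcomes
for r2 in (0.1, 0.3, 0.5):
    print(f"R^2 = {r2}: bias bound ~ {variance_based_bias_bound(w, y, r2):.3f}")
```

Because the parameter is standardized and bounded, the implied bound grows smoothly as R² increases and diverges only as R² approaches 1, which is what makes it easier to calibrate than an unbounded worst-case error.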
Award ID(s):
2142146
PAR ID:
10609214
Author(s) / Creator(s):
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
Biometrika
Volume:
112
Issue:
1
ISSN:
1464-3510
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper considers the recently popular beyond-worst-case algorithm analysis model that integrates machine-learned predictions with online algorithm design. We consider the online Steiner tree problem in this model for both directed and undirected graphs. Steiner tree is known to have strong lower bounds in the online setting, and any algorithm's worst-case guarantee is far from desirable. This paper considers algorithms that predict which terminals arrive online. The predictions may be incorrect, and the algorithms' performance is parameterized by the number of incorrectly predicted terminals. These guarantees ensure that algorithms break through the online lower bounds with good predictions and that the competitive ratio degrades gracefully as the prediction error grows. We then observe that the theory is predictive of what occurs empirically. We show that on graphs where terminals are drawn from a distribution, the new online algorithms have strong performance even with modestly correct predictions.
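As a rough illustration of the learning-augmented setting sketched in this abstract, the following greedy sketch pre-connects the predicted terminals and then serves the actual arrivals, paying extra only for mispredictions. The greedy rule and names are our own simplification, assuming networkx is available; this is not the paper's algorithm.

```python
import networkx as nx

def online_steiner_with_predictions(G, root, predicted, arrivals, weight="weight"):
    """Greedy online Steiner tree with (possibly wrong) terminal
    predictions: buy cheapest paths to predicted terminals up front,
    then greedily connect each actual arrival to the tree so far."""
    tree_nodes = {root}
    bought = set()

    def connect(t):
        if t in tree_nodes:
            return
        # Cheapest path from any node of the current tree to t.
        _, path = nx.multi_source_dijkstra(G, tree_nodes, target=t, weight=weight)
        for u, v in zip(path, path[1:]):
            bought.add(frozenset((u, v)))
        tree_nodes.update(path)

    for p in predicted:   # trust the predictions up front
        connect(p)
    for t in arrivals:    # correctly predicted terminals cost nothing extra
        connect(t)
    return bought

G = nx.grid_2d_graph(5, 5)
nx.set_edge_attributes(G, 1, "weight")
edges = online_steiner_with_predictions(
    G, root=(0, 0), predicted=[(4, 4), (0, 4)], arrivals=[(4, 4), (2, 3)])
print(len(edges), "edges bought")
```

With perfect predictions the arrivals are already connected, and the cost degrades with the number of mispredicted terminals, mirroring the error-parameterized guarantee described above.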
  2. In this paper, we prove that Distributional Reinforcement Learning (DistRL), which learns the return distribution, can obtain second-order bounds in both online and offline RL in general settings with function approximation. Second-order bounds are instance-dependent bounds that scale with the variance of the return, which we prove are tighter than the previously known small-loss bounds of distributional RL. To the best of our knowledge, our results are the first second-order bounds for low-rank MDPs and for offline RL. When specializing to contextual bandits (the one-step RL problem), we show that a distributional-learning-based optimism algorithm achieves a second-order worst-case regret bound and a second-order gap-dependent bound simultaneously. We also empirically demonstrate the benefit of DistRL in contextual bandits on real-world datasets. We highlight that our analysis with DistRL is relatively simple, follows the general framework of optimism in the face of uncertainty, and does not require weighted regression. Our results suggest that DistRL is a promising framework for obtaining second-order bounds in general RL settings, thus further reinforcing the benefits of DistRL.
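To give a concrete flavor of the variance-dependent (second-order) behavior in the contextual-bandit specialization, the toy below keeps an empirical return distribution per arm and scales its optimism bonus by the arm's estimated standard deviation, so low-variance arms earn tight bounds quickly. It is an illustrative stand-in, not the algorithm analyzed in the paper, and drops the context for brevity.

```python
import numpy as np

def distributional_ucb(arm_reward_fns, horizon, seed=0):
    """Toy variance-aware bandit: the empirical return distribution of
    each arm drives an optimism bonus proportional to its estimated
    standard deviation (second-order scaling)."""
    rng = np.random.default_rng(seed)
    samples = [[] for _ in arm_reward_fns]
    for t in range(1, horizon + 1):
        scores = []
        for s in samples:
            if not s:
                scores.append(np.inf)  # force one pull of every arm
                continue
            arr = np.asarray(s)
            # Bonus shrinks with pulls and with the arm's variance.
            scores.append(arr.mean() + arr.std() * np.sqrt(2 * np.log(t) / len(arr)))
        a = int(np.argmax(scores))
        samples[a].append(arm_reward_fns[a](rng))
    return [float(np.mean(s)) for s in samples]

arms = [lambda r: r.normal(0.5, 0.1),   # low-variance arm
        lambda r: r.normal(0.6, 1.0)]   # high-variance arm
print(distributional_ucb(arms, horizon=2000))
```

An arm with near-zero return variance incurs essentially no exploration penalty here, which is the intuition behind bounds that scale with the variance of the return rather than with a worst-case range.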
  3. The problem of selecting "effective preemption points" in a program (points in the code at which to permit preemption) in order to minimize overall running time is considered. Prior solutions to this problem are based on workload models in which known worst-case upper bounds are assumed for the duration needed to perform preemptions at particular points in the code and for the time needed to non-preemptively execute the code between preemption points. Since these solutions are based on worst-case assumptions, they tend to select effective preemption points conservatively; consequently, the overall execution time of the program may be needlessly large under most typical run-time circumstances. We consider a more general workload model in which "typical" values, as well as upper bounds, are assumed to be known for the preemption durations and the non-preemptive code-execution durations; given such information, we derive algorithms for the optimal placement of preemption points in a manner that minimizes the typical overall running time (while continuing to guarantee, if needed, upper bounds on the worst-case overall running time). Both off-line solutions (in which all preemption points are selected prior to run-time) and on-line solutions (where the selection of some of the preemption points is made during run-time and can therefore exploit knowledge of the actual durations of prior preemptions and of already executed pieces of code) are presented and proved optimal.
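The offline variant described above admits a small dynamic program. The model below (per-block worst-case times, typical preemption costs, and a cap q_max on the worst-case length of any non-preemptive region) is our own simplification for illustration, not the paper's exact formulation.

```python
def place_preemption_points(wcet, typ_cost, q_max):
    """Pick preemption points (after block i, cost typ_cost[i]) that
    minimize total typical overhead while guaranteeing that no
    non-preemptive region's worst-case execution exceeds q_max.
    O(n^2) DP; returns (overhead, sorted point indices)."""
    n = len(wcet)
    INF = float("inf")
    # best[i]: min typical overhead covering blocks 0..i-1, with a
    # preemption point right after block i-1 (none needed at i == n).
    best, choice = [0.0] + [INF] * n, [None] * (n + 1)
    for i in range(1, n + 1):
        span = 0.0
        for j in range(i - 1, -1, -1):  # previous point sits after block j-1
            span += wcet[j]
            if span > q_max:            # region j..i-1 violates the cap
                break
            cost = best[j] + (typ_cost[i - 1] if i < n else 0.0)
            if cost < best[i]:
                best[i], choice[i] = cost, j
    points, i = [], n
    while i and choice[i] is not None:  # walk the DP back to recover points
        if choice[i] > 0:
            points.append(choice[i] - 1)
        i = choice[i]
    return best[n], sorted(points)

# Five blocks; no region's worst case may exceed 6 time units.
print(place_preemption_points([2, 3, 1, 4, 2], [0.5, 0.2, 0.9, 0.3, 0.0], q_max=6))
```

The on-line refinement described in the abstract would re-run this kind of decision as actual durations are observed; the sketch covers only the static placement.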
  4. Summary: Scenario-based model predictive control (MPC) methods can mitigate the conservativeness inherent to open-loop robust MPC. Yet the scenarios are often generated offline based on worst-case uncertainty descriptions obtained a priori, which can in turn limit the improvements in robust control performance. To this end, this paper presents a learning-based, adaptive-scenario-tree model predictive control approach for uncertain nonlinear systems with time-varying and/or hard-to-model dynamics. Bayesian neural networks (BNNs) are used to learn a state- and input-dependent description of model uncertainty, namely the mismatch between a nominal (physics-based or data-driven) model of a system and its actual dynamics. We first present a new approach for training robust BNNs (RBNNs) using probabilistic Lipschitz bounds to provide a less conservative uncertainty quantification. Then, we present an approach to evaluate the credible intervals of RBNN predictions and to determine the number of samples required for estimating the credible intervals at a given credible level. The performance of RBNNs is evaluated against that of standard BNNs and Gaussian process (GP) regression as a basis of comparison. The RBNN description of plant-model mismatch, with verified accurate credible intervals, is employed to generate adaptive scenarios online for scenario-based MPC (sMPC). On a cold atmospheric plasma system, the proposed sMPC approach with an adaptive scenario tree improves robust control performance relative to sMPC with a fixed, worst-case scenario tree and relative to adaptive-scenario-based MPC (asMPC) using GP regression. Furthermore, closed-loop simulation results illustrate that robust model uncertainty learning via RBNNs can enhance the probability of constraint satisfaction of asMPC.
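One self-contained piece of this pipeline, sizing the sample budget for a Monte Carlo credible interval over predictive draws, can be sketched generically. The DKW-based rule below is a standard concentration argument used as a stand-in; the paper's own sample-size procedure is not reproduced here, and the Gaussian draws merely stand in for RBNN predictive samples.

```python
import numpy as np

def required_samples(eps, delta):
    """Dvoretzky-Kiefer-Wolfowitz bound: enough i.i.d. draws that the
    empirical CDF is within eps of the truth with prob. >= 1 - delta."""
    return int(np.ceil(np.log(2.0 / delta) / (2.0 * eps ** 2)))

def credible_interval(samples, level=0.95):
    """Equal-tailed credible interval from predictive samples."""
    lo, hi = np.quantile(samples, [(1 - level) / 2, (1 + level) / 2])
    return float(lo), float(hi)

n = required_samples(eps=0.02, delta=0.05)   # CDF accurate to 0.02 w.p. 0.95
rng = np.random.default_rng(1)
draws = rng.normal(0.0, 1.0, size=n)         # stand-in for RBNN predictive draws
print(n, credible_interval(draws, level=0.95))
```

In an adaptive-scenario MPC loop, intervals like these would define the scenario bounds regenerated online at each sampling instant.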