Title: Importance sampling in reinforcement learning with an estimated behavior policy
Abstract

In reinforcement learning, importance sampling is a widely used method for evaluating an expectation under one policy's data distribution when the data was in fact generated by a different policy. Importance sampling requires computing the likelihood ratio between the action probabilities of a target policy and those of the data-producing behavior policy. In this article, we study importance sampling where the behavior policy's action probabilities are replaced by their maximum likelihood estimates under the observed data. We show that this general technique reduces the variance due to sampling error in Monte Carlo-style estimators. We introduce two novel estimators that use this technique to estimate expected values that arise in the RL literature. We find that these general estimators reduce the variance of Monte Carlo sampling methods, leading to faster learning for policy gradient algorithms and more accurate off-policy policy evaluation. We also provide theoretical analysis showing that our new estimators are consistent and have asymptotically lower variance than Monte Carlo estimators.
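To make the technique concrete, here is a minimal multi-armed bandit sketch in Python. Everything in it (the policies, reward means, and sample size) is our own illustrative choice rather than the paper's experimental setup: ordinary importance sampling weights each logged reward by pi(a)/mu(a) using the true behavior probabilities, while the estimated-behavior variant replaces mu with its count-based maximum likelihood estimate from the logged actions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 4
mu = np.array([0.4, 0.3, 0.2, 0.1])   # behavior policy (generates the data)
pi = np.array([0.1, 0.2, 0.3, 0.4])   # target policy to evaluate
reward_means = np.array([1.0, 0.5, 0.2, 0.8])

n = 1000
actions = rng.choice(n_actions, size=n, p=mu)
rewards = reward_means[actions] + rng.normal(scale=0.1, size=n)

# Ordinary importance sampling: likelihood ratio uses the true behavior
# probabilities mu(a).
ois = np.mean(pi[actions] / mu[actions] * rewards)

# Estimated-behavior importance sampling: replace mu(a) with its maximum
# likelihood estimate under the observed data (the empirical frequencies).
mu_hat = np.bincount(actions, minlength=n_actions) / n
est_is = np.mean(pi[actions] / mu_hat[actions] * rewards)

true_value = pi @ reward_means
print(f"true {true_value:.4f}  ordinary IS {ois:.4f}  estimated-behavior IS {est_is:.4f}")
```

Because the maximum likelihood estimate matches the empirical action frequencies exactly, the corrected weights cancel the sampling error in how often each action happened to be drawn, which is the source of the variance reduction described in the abstract.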

 
NSF-PAR ID: 10226772
Author(s) / Creator(s): ; ;
Publisher / Repository: Springer Science + Business Media
Date Published:
Journal Name: Machine Learning
Volume: 110
Issue: 6
ISSN: 0885-6125
Page Range / eLocation ID: p. 1267-1317
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Fourier analysis is gaining popularity in image synthesis as a tool for the analysis of error in Monte Carlo (MC) integration. Still, existing tools are only able to analyse convergence under simplifying assumptions (such as randomized shifts) which are not applied in practice during rendering. We reformulate the expressions for bias and variance of sampling-based integrators to unify non-uniform sample distributions [importance sampling (IS)] as well as correlations between samples while respecting finite sampling domains. Our unified formulation hints at fundamental limitations of Fourier-based tools in performing variance analysis for MC integration. At the same time, it reveals that, when combined with correlated sampling, IS can impact convergence rate by introducing or inhibiting discontinuities in the integrand. We demonstrate that the convergence of multiple importance sampling (MIS) is determined by the strategy which converges slowest and propose several simple approaches to overcome this limitation. We show that smoothing light boundaries (as commonly done in production to reduce variance) can improve (M)IS convergence (at a cost of introducing a small amount of bias) since it removes C^0 discontinuities within the integration domain. We also propose practical integrand- and sample-mirroring approaches which cancel the impact of boundary discontinuities on the convergence rate of estimators.
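For reference, the following is a minimal balance-heuristic MIS estimator on a toy 1D integral; the integrand and the two sampling strategies are our own illustrative choices, not the paper's test scenes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy integrand on [0, 1]: f(x) = x^2, with exact integral 1/3.
f = lambda x: x ** 2

# Two sampling strategies: uniform, and importance sampling with pdf 2x.
pdf_a = lambda x: np.ones_like(x)            # uniform pdf on [0, 1]
pdf_b = lambda x: 2.0 * x                    # linear pdf on [0, 1]
sample_a = lambda n: rng.random(n)
sample_b = lambda n: np.sqrt(rng.random(n))  # inverse-CDF sampling of 2x

def mis_balance(n):
    """Balance-heuristic MIS with n samples per strategy. With equal sample
    counts, w_i(x) = pdf_i(x) / (pdf_a(x) + pdf_b(x)), so each sample
    contributes w_i(x) * f(x) / pdf_i(x) = f(x) / (pdf_a(x) + pdf_b(x))."""
    contrib = lambda x: f(x) / (pdf_a(x) + pdf_b(x))
    return contrib(sample_a(n)).mean() + contrib(sample_b(n)).mean()

estimates = [mis_balance(1024) for _ in range(100)]
print(np.mean(estimates), "+/-", np.std(estimates), "vs exact", 1 / 3)
```

In a combined estimator like this, the overall error is dominated by whichever strategy converges slowest, which is exactly the limitation the mirroring constructions above are designed to remove.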

     
  2. We estimate the parameter of a stationary time series process by minimizing the integrated weighted mean squared error between the empirical and simulated characteristic functions, when the true characteristic function cannot be computed explicitly. Motivated by indirect inference, we use a Monte Carlo approximation of the characteristic function based on i.i.d. simulated blocks. As a classical variance reduction technique, we propose the use of control variates to reduce the variance of this Monte Carlo approximation. These two approximations yield two new estimators that are applicable to a large class of time series processes. We show consistency and asymptotic normality of the parameter estimators under strong mixing, moment conditions, and smoothness of the simulated blocks with respect to the parameter. In a simulation study, we show the good performance of these new simulation-based estimators and the superiority of the control-variate-based estimator for Poisson-driven time series of counts.
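A minimal sketch of the plain Monte Carlo version of this estimator, assuming a stationary AR(1) process as a stand-in model and our own choice of block length, frequency grid, and weights (the control-variate refinement, which would further reduce the Monte Carlo noise in the simulated characteristic function, is omitted):

```python
import numpy as np

def emp_cf(blocks, u):
    """Empirical characteristic function of blocks (n_blocks, p) evaluated
    at frequency vectors u (n_freq, p)."""
    return np.exp(1j * blocks @ u.T).mean(axis=0)

def objective(theta, data_blocks, u, weights, n_sim=500, block_len=2):
    """Weighted squared distance between the empirical CF of the data and a
    Monte Carlo CF computed from i.i.d. simulated AR(1) blocks."""
    rng = np.random.default_rng(0)   # common random numbers across theta
    x = rng.normal(size=n_sim)       # stationary start, unit variance
    sim = np.empty((n_sim, block_len))
    for t in range(block_len):
        x = theta * x + np.sqrt(1.0 - theta ** 2) * rng.normal(size=n_sim)
        sim[:, t] = x
    diff = emp_cf(data_blocks, u) - emp_cf(sim, u)
    return np.sum(weights * np.abs(diff) ** 2)

# Toy data: AR(1) with true parameter 0.6, cut into overlapping pairs.
rng = np.random.default_rng(2)
y = np.empty(2000)
y[0] = rng.normal()
for t in range(1, y.size):
    y[t] = 0.6 * y[t - 1] + np.sqrt(1.0 - 0.6 ** 2) * rng.normal()
blocks = np.column_stack([y[:-1], y[1:]])

u = rng.normal(size=(32, 2))         # crude discretization of the integral
w = np.full(32, 1.0 / 32)
grid = np.linspace(0.1, 0.9, 17)
print(grid[int(np.argmin([objective(th, blocks, u, w) for th in grid]))])
```

Fixing the simulation seed across candidate parameters (common random numbers) keeps the objective smooth in theta, which matters for the minimization even before any variance reduction is applied.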

     
  3. ABSTRACT

    We introduce a novel approach to boost the efficiency of the importance nested sampling (INS) technique for Bayesian posterior and evidence estimation using deep learning. Unlike rejection-based sampling methods such as vanilla nested sampling (NS) or Markov chain Monte Carlo (MCMC) algorithms, importance sampling techniques can use all likelihood evaluations for posterior and evidence estimation. However, for efficient importance sampling, one needs proposal distributions that closely mimic the posterior distribution. We show how to combine INS with deep learning via neural network regression to accomplish this task. We also introduce nautilus, a reference open-source Python implementation of this technique for Bayesian posterior and evidence estimation. We compare nautilus against popular NS and MCMC packages, including emcee, dynesty, ultranest, and pocomc, on a variety of challenging synthetic problems and real-world applications in exoplanet detection, galaxy SED fitting, and cosmology. In all applications, the sampling efficiency of nautilus is substantially higher than that of all other samplers, often by more than an order of magnitude. Simultaneously, nautilus delivers highly accurate results and needs fewer likelihood evaluations than all other samplers tested. We also show that nautilus scales well with the dimensionality of the likelihood and is easily parallelizable across many CPUs.
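The identity these importance-sampling-based samplers exploit fits in a few lines. The sketch below is generic Python with SciPy, not nautilus's actual API, and the hand-picked Gaussian proposal stands in for the neural-network proposal that the method learns:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Toy 2D problem with an analytically known evidence for checking:
# Z = integral of likelihood(theta) * prior(theta) d(theta).
prior = stats.multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2) * 9.0)
likelihood = stats.multivariate_normal(mean=[1.0, -1.0], cov=np.eye(2) * 0.25)

# Proposal that roughly mimics the posterior (slightly over-dispersed).
proposal = stats.multivariate_normal(mean=[1.0, -1.0], cov=np.eye(2) * 0.5)

n = 20_000
theta = proposal.rvs(n, random_state=rng)

# Every likelihood evaluation contributes to the estimates:
# Z = E_q[ likelihood(theta) * prior(theta) / q(theta) ].
w = likelihood.pdf(theta) * prior.pdf(theta) / proposal.pdf(theta)
log_z = np.log(w.mean())

# Posterior expectations reuse the same weights (self-normalized IS).
post_mean = (theta * w[:, None]).sum(axis=0) / w.sum()

# Exact evidence: convolution of the two Gaussians evaluated at their offset.
exact = stats.multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2) * 9.25).pdf([1.0, -1.0])
print("log Z", log_z, "exact", np.log(exact), "posterior mean", post_mean)
```

The efficiency of such an estimator collapses when the proposal is a poor match for the posterior, which is why learning the proposal (here, via neural network regression) is the crux of the approach described above.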

     
  4. Motivated by the many real-world applications of reinforcement learning (RL) that require safe policy iterations, we consider the problem of off-policy evaluation (OPE), i.e., evaluating a new policy using the historical data obtained from different behavior policies, under the model of nonstationary episodic Markov Decision Processes (MDPs) with a long horizon and a large action space. Existing importance sampling (IS) methods often suffer from variance that depends exponentially on the RL horizon H. To solve this problem, we consider a marginalized importance sampling (MIS) estimator that recursively estimates the state marginal distribution for the target policy at every step. MIS achieves a mean-squared error of

    $$\frac{1}{n}\sum_{t=1}^{H}\mathbb{E}_{\mu}\!\left[\frac{d_t^{\pi}(s_t)^2}{d_t^{\mu}(s_t)^2}\,\mathrm{Var}_{\mu}\!\left[\frac{\pi_t(a_t\mid s_t)}{\mu_t(a_t\mid s_t)}\bigl(V_{t+1}^{\pi}(s_{t+1})+r_t\bigr)\,\Big|\,s_t\right]\right]+\tilde{O}(n^{-1.5}),$$

    where $\mu$ and $\pi$ are the logging and target policies, $d_t^{\mu}(s_t)$ and $d_t^{\pi}(s_t)$ are the marginal distributions of the state at the $t$-th step, $H$ is the horizon, $n$ is the sample size, and $V_{t+1}^{\pi}$ is the value function of the MDP under $\pi$. The result matches the Cramér-Rao lower bound in Jiang and Li [2016] up to a multiplicative factor of $H$. To the best of our knowledge, this is the first OPE estimation error bound with a polynomial dependence on $H$. Besides the theory, we show the empirical superiority of our method in time-varying, partially observable, and long-horizon RL environments.
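To illustrate the recursion, here is a tabular Python sketch in the spirit of MIS (an illustrative reconstruction, not necessarily the authors' exact estimator): the state marginal under the target policy is pushed forward one step at a time using importance-reweighted observed transitions, so each reward is weighted by a single marginal ratio and a single-step action ratio instead of a product of H action ratios.

```python
import numpy as np

def mis_ope(episodes, pi, mu, n_states, horizon):
    """Marginalized-importance-sampling style OPE for a tabular,
    nonstationary episodic MDP.

    episodes: array (n, H, 3) of (s_t, a_t, r_t) collected under mu
    pi, mu:   target / behavior policies, arrays of shape (H, S, A)
    """
    n = episodes.shape[0]
    s = episodes[:, :, 0].astype(int)
    a = episodes[:, :, 1].astype(int)
    r = episodes[:, :, 2]

    value = 0.0
    # The marginal under pi at t = 0 is the empirical initial distribution.
    d_pi = np.bincount(s[:, 0], minlength=n_states) / n
    for t in range(horizon):
        d_mu = np.bincount(s[:, t], minlength=n_states) / n
        state_ratio = np.divide(d_pi, d_mu, out=np.zeros(n_states), where=d_mu > 0)
        # Per-episode weight: marginal ratio times one-step action ratio.
        w = state_ratio[s[:, t]] * pi[t, s[:, t], a[:, t]] / mu[t, s[:, t], a[:, t]]
        value += np.mean(w * r[:, t])
        if t + 1 < horizon:
            # Recursion: push the target-policy marginal one step forward
            # using the observed, reweighted transitions.
            d_pi = np.bincount(s[:, t + 1], weights=w, minlength=n_states) / n
    return value
```

Because no weight ever multiplies more than one action ratio, the variance of this estimator does not inherit the exponential-in-H blow-up of ordinary trajectory-wise importance sampling.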
  5. Many Markov chain Monte Carlo (MCMC) methods leverage gradient information about the potential function of the target distribution to explore the sample space efficiently. However, computing gradients can be computationally expensive in large-scale applications such as contemporary machine learning. Stochastic gradient (SG-)MCMC methods approximate gradients by stochastic ones, commonly via uniformly subsampled data points, and achieve improved computational efficiency, though at the price of introducing sampling error. We propose a non-uniform subsampling scheme to improve the sampling accuracy. The proposed exponentially weighted stochastic gradient (EWSG) method is designed so that a non-uniform SG-MCMC method mimics the statistical behavior of a batch-gradient MCMC method, and hence the inaccuracy due to the SG approximation is reduced. EWSG differs from classical variance reduction (VR) techniques in that it targets the entire distribution rather than just the variance; nevertheless, its reduced local variance is also proved. EWSG can also be viewed as an extension of the importance sampling idea, successful for stochastic-gradient-based optimization, to sampling tasks. In our practical implementation of EWSG, the non-uniform subsampling is performed efficiently via a Metropolis-Hastings chain on the data index, which is coupled to the MCMC algorithm. Numerical experiments are provided not only to demonstrate EWSG's effectiveness but also to guide hyperparameter choices and to validate our non-asymptotic global error bound despite the approximations in the implementation. Notably, while statistical accuracy is improved, convergence speed can be comparable to the uniform version, which renders EWSG a practical alternative to VR (though EWSG and VR can also be combined).
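For orientation, here is a generic SGLD sketch with non-uniform index sampling plus importance reweighting. It is a simplified stand-in for EWSG, not the method itself: EWSG selects indices via a Metropolis-Hastings chain coupled to the sampler (avoiding the full pass over the data used below) and matches the statistical behavior of the batch-gradient chain rather than only keeping the stochastic gradient unbiased.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy target: posterior of a 1D Gaussian mean (standard-normal prior,
# unit-variance likelihood) given N observations.
data = rng.normal(loc=2.0, scale=1.0, size=1000)
N = data.size

def grad_log_post_i(theta, i):
    """Per-datum contribution g_i; the full gradient is sum_i g_i."""
    return -theta / N + (data[i] - theta)

def sgld_nonuniform(n_iter=5000, step=1e-4, batch=10):
    theta = 0.0
    samples = np.empty(n_iter)
    for k in range(n_iter):
        # Non-uniform index distribution, here proportional to per-datum
        # gradient magnitude at the current theta (computed with a full
        # pass for simplicity; EWSG avoids exactly this cost).
        scores = np.abs(data - theta) + 1e-3
        p = scores / scores.sum()
        idx = rng.choice(N, size=batch, p=p)
        # Importance reweighting keeps the estimate unbiased:
        # E[g_i / (N * p_i)] = (1/N) * sum_i g_i.
        g = N * np.mean([grad_log_post_i(theta, i) / (N * p[i]) for i in idx])
        theta += 0.5 * step * g + np.sqrt(step) * rng.normal()
        samples[k] = theta
    return samples

print(sgld_nonuniform()[2000:].mean())  # close to the posterior mean ~2
```

The reweighting makes the stochastic gradient unbiased for any index distribution; the design question EWSG answers is which non-uniform distribution makes the resulting chain behave most like the batch-gradient chain.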

     