
Title: A conditional density estimation partition model using logistic Gaussian processes
Summary: Conditional density estimation seeks to model the distribution of a response variable conditional on covariates. We propose a Bayesian partition model using logistic Gaussian processes to perform conditional density estimation. The partition takes the form of a Voronoi tessellation and is learned from the data using a reversible jump Markov chain Monte Carlo algorithm. The methodology models data in which the density changes sharply throughout the covariate space, and can be used to determine where important changes in the density occur. The Markov chain Monte Carlo algorithm involves a Laplace approximation on the latent variables of the logistic Gaussian process model, which marginalizes the parameters in each partition element, allowing an efficient search of the approximate posterior distribution of the tessellation. The method is consistent when the density is piecewise constant in the covariate space or when the density is Lipschitz continuous with respect to the covariates. In simulation and in an application to wind turbine data, the model successfully estimates the partition structure and conditional distribution.
Award ID(s): 1934904
NSF-PAR ID: 10178809
Journal Name: Biometrika
Volume: 107
Issue: 1
Page Range or eLocation-ID: 173–190
ISSN: 0006-3444
Sponsoring Org: National Science Foundation
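As an informal illustration of two building blocks named in the summary above, the nearest-center (Voronoi) assignment of observations to partition elements and the logistic Gaussian process map from a latent function to a density, here is a minimal Python sketch. The function names, hyperparameters, and grid discretization are assumptions for illustration; the sketch draws from the prior on a grid and does not implement the paper's reversible jump sampler or Laplace approximation.

    import numpy as np

    def voronoi_assign(X, centers):
        # Each observation belongs to the tessellation element whose
        # center is nearest in the covariate space.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return d2.argmin(axis=1)

    def logistic_gp_density(y_grid, lengthscale=0.5, rng=None):
        # Draw a latent function g from a squared-exponential GP prior on
        # a uniform grid, then map it to a density via the logistic
        # transform f(y) = exp{g(y)} / integral exp{g(t)} dt, with the
        # integral approximated by a Riemann sum on the grid.
        rng = np.random.default_rng(0) if rng is None else rng
        diff = y_grid[:, None] - y_grid[None, :]
        K = np.exp(-0.5 * (diff / lengthscale) ** 2) + 1e-8 * np.eye(len(y_grid))
        g = rng.multivariate_normal(np.zeros(len(y_grid)), K)
        w = np.exp(g - g.max())
        dy = y_grid[1] - y_grid[0]
        return w / (w.sum() * dy)

In the model described above, a density of this logistic Gaussian process form is fitted to the responses in each Voronoi element, and the tessellation itself is sampled by reversible jump MCMC.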
More Like this
  1. We propose a Bayesian decision-making framework for control of Markov decision processes (MDPs) with unknown dynamics and large, possibly continuous, state, action, and parameter spaces in data-poor environments. Most existing adaptive controllers for MDPs with unknown dynamics are based on the reinforcement learning framework and rely on large data sets acquired by sustained direct interaction with the system or via a simulator. This is not feasible in many applications, due to ethical, economic, and physical constraints. The proposed framework addresses the data-poverty issue by decomposing the problem into an offline planning stage that does not rely on sustained direct interaction with the system or simulator and an online execution stage. In the offline stage, parallel Gaussian process temporal difference (GPTD) learning techniques are employed for near-optimal Bayesian approximation of the expected discounted reward over a sample drawn from the prior distribution of the unknown parameters. In the online stage, the action with the maximum expected return with respect to the posterior distribution of the parameters is selected. This is achieved by approximating the posterior distribution using a Markov chain Monte Carlo (MCMC) algorithm and then constructing multiple Gaussian processes over the parameter space for efficient prediction of the mean expected return at the MCMC samples (this online step is sketched after this list). The effectiveness of the proposed framework is demonstrated using a simple dynamical system model with continuous state and action spaces, as well as a more complex model for a metastatic melanoma gene regulatory network observed through noisy synthetic gene expression data.
  2. Stochastic gradient Langevin dynamics (SGLD) and stochastic gradient Hamiltonian Monte Carlo (SGHMC) are two popular Markov chain Monte Carlo (MCMC) algorithms for Bayesian inference that can scale to large datasets, allowing one to sample from the posterior distribution of the parameters of a statistical model given the input data and the prior distribution over the model parameters. However, these algorithms do not apply to the decentralized learning setting, in which a network of agents works collaboratively to learn the parameters of a statistical model without sharing their individual data, whether for privacy reasons or because of communication constraints. We study two algorithms, decentralized SGLD (DE-SGLD) and decentralized SGHMC (DE-SGHMC), which adapt SGLD and SGHMC to allow scalable Bayesian inference in the decentralized setting for large datasets. We show that when the posterior distribution is strongly log-concave and smooth, the iterates of these algorithms converge linearly to a neighborhood of the target distribution in the 2-Wasserstein distance if their parameters are selected appropriately (the form of the DE-SGLD iterate is sketched after this list). We illustrate the efficiency of our algorithms on decentralized Bayesian linear regression and Bayesian logistic regression problems.
  3. Markov chain Monte Carlo algorithms have important applications in counting problems and in machine learning problems, settings that involve estimating quantities that are difficult to compute exactly. How much can quantum computers speed up classical Markov chain algorithms? In this work we consider the problem of speeding up simulated annealing algorithms, where the stationary distributions of the Markov chains are Gibbs distributions at temperatures specified according to an annealing schedule. We construct a quantum algorithm that both adaptively constructs an annealing schedule and quantum samples at each temperature. Our adaptive annealing schedule roughly matches the length of the best classical adaptive annealing schedules and improves on nonadaptive temperature schedules by roughly a quadratic factor. Our dependence on the Markov chain gap matches other quantum algorithms and is quadratically better than what classical Markov chains achieve. Our algorithm is the first to combine both of these quadratic improvements. Like other quantum walk algorithms, it also improves on classical algorithms by producing “qsamples” instead of classical samples, that is, by preparing quantum states whose amplitudes are the square roots of the target probability distribution. In constructing the annealing schedule we make use of amplitude estimation, and we introduce a method for making amplitude estimation nondestructive at almost no additional cost, a result that may have independent interest. Finally, we demonstrate how this quantum simulated annealing algorithm can be applied to the problems of estimating partition functions and Bayesian inference (a classical analogue of the partition-function estimator is sketched after this list).
  4. Summary: Stochastic gradient Markov chain Monte Carlo algorithms have received much attention in Bayesian computing for big-data problems, but they are applicable only to a small class of problems in which the parameter space has a fixed dimension and the log-posterior density is differentiable with respect to the parameters. This paper proposes an extended stochastic gradient Markov chain Monte Carlo algorithm which, by introducing appropriate latent variables, can be applied to more general large-scale Bayesian computing problems, such as those involving dimension jumping and missing data (the baseline update it extends is sketched after this list). Numerical studies show that the proposed algorithm is highly scalable and much more efficient than traditional Markov chain Monte Carlo algorithms.
  5. Electrification of vehicles is becoming one of the main avenues for decarbonization of the transportation market. To reduce stress on the energy grid, large-scale charging will require optimal scheduling of when electricity is delivered to vehicles. Coordinated electric-vehicle charging can produce optimal, flattened loads that would improve the reliability of the power system as well as reduce system costs and emissions. However, a challenge for the successful introduction of coordinated deadline scheduling of residential charging comes from the demand side: customers would need to be willing both to defer charging their vehicles and to accept less than a 100% target for battery charge. Within a coordinated electric-vehicle charging pilot run by the local utility in upstate New York, this study analyzes the incentives necessary for customers to accept giving up control of when the charging of their vehicles takes place. Using data from a choice experiment implemented in an online survey of electric-vehicle owners and lessees in upstate New York (N=462), we make inference on the willingness to pay for features of hypothetical coordinated electric-vehicle charging programs. To address unobserved preference heterogeneity, we apply variational Bayes (VB) inference to a mixed logit model (the simulated choice probability such models target is sketched after this list). Stochastic variational inference has recently emerged as a fast and computationally efficient alternative to Markov chain Monte Carlo (MCMC) methods for scalable Bayesian estimation of discrete choice models. Our results show that individuals negatively perceive the duration of the timeframe in which the energy provider would be allowed to defer charging, even though both the desired target for battery charge and the deadline would be respected. This negative monetary valuation is evidenced by an expected average reduction in the annual fee of joining the charging program of $2.64 per hour of control yielded to the energy provider. Our results also provide evidence of substantial heterogeneity in preferences. For example, the 25% quantile of the posterior distribution of the mean willingness to accept an additional hour of control yielded to the utility is $5.16. However, the negative valuation of the timeframe for deferring charging is compensated by a positive valuation of the emission savings that come from switching charging to periods of the day with a higher proportion of generation from renewable sources. Customers also positively valued discounts in the price of energy delivery.
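For item 1 above, a schematic Python sketch of the online stage: average each action's GP-predicted expected return over MCMC posterior samples of the unknown parameters and take the maximizer. The names and the use of scikit-learn's GaussianProcessRegressor are assumptions for illustration, not the authors' implementation.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def select_action(actions, value_gps, posterior_samples):
        # value_gps[a]: a GaussianProcessRegressor fitted offline (e.g., to
        # GPTD value estimates) mapping a parameter vector to the expected
        # discounted return of action a.
        # posterior_samples: MCMC draws from the parameter posterior,
        # shape (n_samples, n_params).
        expected_return = [gp.predict(posterior_samples).mean() for gp in value_gps]
        return actions[int(np.argmax(expected_return))]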
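For item 2 above, a minimal sketch of the algebraic form of one DE-SGLD iteration, assuming a doubly stochastic mixing matrix W that encodes the communication network; step-size schedules and the construction of the stochastic gradients are omitted.

    import numpy as np

    def de_sgld_step(X, W, local_grads, eta, rng):
        # X: (n_agents, dim) current iterates, one row per agent.
        # W: (n_agents, n_agents) mixing matrix; row i averages agent i's
        #    iterate with its neighbors' iterates only.
        # local_grads: (n_agents, dim) stochastic gradients of each agent's
        #    local negative log-posterior term at its own iterate.
        # Gossip averaging + gradient step + Gaussian noise of variance 2*eta.
        noise = rng.normal(size=X.shape)
        return W @ X - eta * local_grads + np.sqrt(2.0 * eta) * noise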
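For item 3 above, a classical analogue of the partition-function estimator, written as a telescoping product over the annealing schedule. In the quantum algorithm the Gibbs samples are replaced by qsamples prepared by quantum walks and the Monte Carlo averages by nondestructive amplitude estimation; this sketch shows only the classical skeleton, with hypothetical argument names.

    import numpy as np

    def log_partition_estimate(energies_by_stage, betas, log_Z0):
        # Z(beta_T) = Z(beta_0) * prod_t Z(beta_{t+1}) / Z(beta_t), with each
        # ratio estimated by averaging exp(-(beta_{t+1} - beta_t) * E(x))
        # over samples x drawn from the Gibbs distribution at beta_t.
        # energies_by_stage[t]: energies E(x) of the samples drawn at beta_t.
        log_Z = log_Z0
        for t in range(len(betas) - 1):
            ratio = np.mean(np.exp(-(betas[t + 1] - betas[t]) * energies_by_stage[t]))
            log_Z += np.log(ratio)
        return log_Z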
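For item 4 above, the baseline stochastic gradient Langevin dynamics update that such algorithms build on, in which the minibatch log-likelihood gradient is rescaled by n/m to give an unbiased estimate of the full-data gradient. The extended algorithm's latent-variable augmentation for dimension jumping and missing data is not shown.

    import numpy as np

    def sgld_step(theta, grad_log_prior, grad_log_lik_minibatch, n, m, eta, rng):
        # One SGLD update: a half-step along an unbiased estimate of the
        # log-posterior gradient, plus Gaussian noise of variance eta.
        drift = grad_log_prior(theta) + (n / m) * grad_log_lik_minibatch(theta)
        return theta + 0.5 * eta * drift + np.sqrt(eta) * rng.normal(size=theta.shape)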
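For item 5 above, the simulated choice probability at the heart of a mixed logit model: standard logit probabilities averaged over draws of the random taste coefficients. The Gaussian mixing distribution and the variable names are assumptions for illustration; the VB machinery described in the abstract targets the posterior over the parameters (mu, L) of that mixing distribution.

    import numpy as np

    def mixed_logit_probs(X, mu, L, n_draws=200, rng=None):
        # X: (n_alternatives, n_features) attributes of each alternative.
        # Taste coefficients beta ~ N(mu, L @ L.T) vary across the population.
        rng = np.random.default_rng(0) if rng is None else rng
        z = rng.normal(size=(n_draws, len(mu)))
        betas = mu + z @ L.T                   # (n_draws, n_features)
        u = betas @ X.T                        # (n_draws, n_alternatives)
        u -= u.max(axis=1, keepdims=True)      # numerically stable softmax
        p = np.exp(u)
        p /= p.sum(axis=1, keepdims=True)
        return p.mean(axis=0)                  # simulated choice probabilities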