Title: Bayesian optimization of functional output in inverse problems
Motivated by the parameter identification problem for a reaction-diffusion transport model of vapor phase infiltration processes, we propose a Bayesian optimization procedure for solving the inverse problem of finding an input setting that achieves a desired functional output. The proposed algorithm improves on standard single-objective Bayesian optimization by (i) using the generalized chi-square distribution as a more appropriate predictive distribution for the squared-distance objective function in inverse problems, and (ii) applying functional principal component analysis to reduce the dimensionality of the functional response data, which allows efficient approximation of the predictive distribution and the subsequent computation of the expected improvement acquisition function.
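The two ingredients named in the abstract can be illustrated with a minimal sketch: reduce functional responses to a few principal-component scores, fit an independent Gaussian-process surrogate per score, and note that the squared distance to the target (a sum of squares of Gaussians) follows a generalized chi-square distribution, whose expected improvement is approximated here by Monte Carlo. All data, kernel settings, and variable names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy functional responses y(t; x): one curve per input setting x (assumed setup).
t = np.linspace(0, 1, 50)
X = rng.uniform(0, 1, size=(20, 1))
Y = np.sin(2 * np.pi * np.outer(X[:, 0], t)) + 0.01 * rng.standard_normal((20, 50))
y_target = np.sin(2 * np.pi * 0.3 * t)            # desired functional output

# (ii) Functional PCA: reduce each curve to k principal-component scores.
mean_curve = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - mean_curve, full_matrices=False)
k = 3
scores = U[:, :k] * s[:k]                          # n x k score matrix
target_scores = (y_target - mean_curve) @ Vt[:k].T

# Simple RBF-kernel GP surrogate, fit independently per score component.
def rbf(A, B, ls=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_predict(Xtr, ytr, Xte, noise=1e-4):
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr)
    mu = Ks @ np.linalg.solve(K, ytr)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 1e-12)

# (i) The squared distance sum_j (z_j - target_j)^2, with each z_j Gaussian,
# is generalized chi-square distributed; approximate its EI by sampling.
Xcand = rng.uniform(0, 1, size=(200, 1))
best = np.min(((scores - target_scores) ** 2).sum(axis=1))
S = 500
draws = np.zeros((S, len(Xcand)))
for j in range(k):
    mu, var = gp_predict(X, scores[:, j], Xcand)
    z = mu + np.sqrt(var) * rng.standard_normal((S, len(Xcand)))
    draws += (z - target_scores[j]) ** 2
ei = np.maximum(best - draws, 0.0).mean(axis=0)    # expected improvement (minimization)
x_next = Xcand[np.argmax(ei)]                      # next input setting to evaluate
```

The Monte Carlo step stands in for the paper's efficient approximation of the generalized chi-square predictive distribution; the FPCA truncation at k components is what keeps the per-candidate sampling cost low.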
Award ID(s):
1921873
PAR ID:
10295068
Author(s) / Creator(s):
Date Published:
Journal Name:
Optimization and Engineering
ISSN:
1389-4420
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we develop an optimal weight adaptation strategy of model predictive control (MPC) for connected and automated vehicles (CAVs) in mixed traffic. We model the interaction between a CAV and a human-driven vehicle (HDV) as a simultaneous game and formulate a game-theoretic MPC problem to find a Nash equilibrium of the game. In the MPC problem, the weights in the HDV's objective function can be learned online using moving-horizon inverse reinforcement learning. Using Bayesian optimization, we propose a strategy to optimally adapt the weights in the CAV's objective function so that the expected true cost of using MPC in simulations is minimized. We validate the effectiveness of the optimal strategy through numerical simulations of a vehicle-crossing example at an unsignalized intersection.
  2. Randomized algorithms exploit stochasticity to reduce computational complexity. One important example is random feature regression (RFR), which accelerates Gaussian process regression (GPR). RFR approximates an unknown function with a random neural network whose hidden weights and biases are sampled from a probability distribution; only the final output layer is fit to data. In randomized algorithms like RFR, the hyperparameters that characterize the sampling distribution greatly impact performance, yet are not directly accessible from samples. This makes optimization of hyperparameters via standard (gradient-based) optimization tools inapplicable. Inspired by Bayesian ideas from GPR, this paper introduces a random objective function that is tailored for hyperparameter tuning of vector-valued random features. The objective is minimized with ensemble Kalman inversion (EKI). EKI is a gradient-free, particle-based optimizer that is scalable to high dimensions and robust to randomness in objective functions. A numerical study showcases the new black-box methodology to learn hyperparameter distributions in several problems that are sensitive to the hyperparameter selection: two global sensitivity analyses, integrating a chaotic dynamical system, and solving a Bayesian inverse problem from atmospheric dynamics. The success of the proposed EKI-based algorithm for RFR suggests its potential for automated optimization of hyperparameters arising in other randomized algorithms.
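The RFR mechanics described in item 2 — sampled hidden weights, trained output layer only — reduce to a closed-form ridge regression. A minimal sketch with toy data follows; the sampling scale `sigma` plays the role of the hyperparameter that the paper tunes via EKI, but here it is simply fixed, and all names and settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy samples of f(x) = sin(3x) on [-2, 2] (assumed setup).
Xtr = rng.uniform(-2, 2, size=(100, 1))
ytr = np.sin(3 * Xtr[:, 0]) + 0.05 * rng.standard_normal(100)

# Random Fourier feature map: hidden weights W and biases b are SAMPLED,
# never trained. `sigma` is the sampling-distribution hyperparameter.
D, sigma = 200, 2.0
W = rng.standard_normal((1, D)) * sigma
b = rng.uniform(0, 2 * np.pi, D)

def features(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Only the linear output layer is fit, via ridge regression in closed form.
Phi = features(Xtr)
lam = 1e-6
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ ytr)

# Predict on a test grid and measure worst-case error against the true function.
Xte = np.linspace(-2, 2, 50)[:, None]
pred = features(Xte) @ coef
err = np.max(np.abs(pred - np.sin(3 * Xte[:, 0])))
```

With weights drawn from N(0, sigma^2), this feature map approximates a Gaussian kernel of lengthscale 1/sigma, which is why the choice of sigma (the quantity EKI tunes in the paper) so strongly affects the fit.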
  3. We consider optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs) under model uncertainty. Specifically, we consider inverse problems in which, in addition to the inversion parameters, the governing PDEs include secondary uncertain parameters. We focus on problems with infinite-dimensional inversion and secondary parameters and present a scalable computational framework for optimal design of such problems. The proposed approach enables Bayesian inversion and OED under uncertainty within a unified framework. We build on the Bayesian approximation error (BAE) approach, to incorporate modeling uncertainties in the Bayesian inverse problem, and methods for A-optimal design of infinite-dimensional Bayesian nonlinear inverse problems. Specifically, a Gaussian approximation to the posterior at the maximum a posteriori probability point is used to define an uncertainty-aware OED objective that is tractable to evaluate and optimize. In particular, the OED objective can be computed at a cost, in the number of PDE solves, that does not grow with the dimension of the discretized inversion and secondary parameters. The OED problem is formulated as a binary bilevel PDE-constrained optimization problem, and a greedy algorithm, which provides a pragmatic approach, is used to find optimal designs. We demonstrate the effectiveness of the proposed approach for a model inverse problem governed by an elliptic PDE on a three-dimensional domain. Our computational results also highlight the pitfalls of ignoring modeling uncertainties in the OED and/or inference stages.
  4.
    Seismic full-waveform inversion aims to reconstruct subsurface medium parameters from recorded seismic data. In the deterministic approach, it is solved as a constrained optimization problem. Many different objective functions have been proposed to tackle the nonconvexity that originates from cycle-skipping issues. The analogy between objective functions in deterministic inversion and likelihood functions in Bayesian inversion motivates us to analyze the noise model each objective function accounts for under the Bayesian inference setting. We also show the existence and well-posedness of their corresponding posterior measures. In particular, the theorem shows that the Wasserstein-type likelihood offers better stability with respect to the noise in the recorded data. Together with an application of the level-set prior, we demonstrate by numerical examples the successful reconstruction from Bayesian full-waveform inversion under proper choices of the likelihood function and the prior distribution.
  5. Bayesian optimization is a class of global optimization techniques. In Bayesian optimization, the underlying objective function is modeled as a realization of a Gaussian process. Although the Gaussian process assumption implies a random distribution of the Bayesian optimization outputs, quantification of this uncertainty is rarely studied in the literature. In this work, we propose a novel approach to assess the output uncertainty of Bayesian optimization algorithms, which proceeds by constructing confidence regions of the maximum point (or value) of the objective function. These regions can be computed efficiently, and their confidence levels are guaranteed by the uniform error bounds for sequential Gaussian process regression newly developed in the present work. Our theory provides a unified uncertainty quantification framework for all existing sequential sampling policies and stopping criteria. 
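Item 5 builds on the standard sequential Bayesian optimization loop: fit a Gaussian process to the observations, maximize an expected improvement acquisition, evaluate, and repeat. A minimal sketch of that baseline loop on a toy 1-D objective follows; the kernel, grid, and variable names are illustrative assumptions, and the loop shown is the generic procedure, not the paper's confidence-region construction.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)

def f(x):                                  # toy objective to MAXIMIZE
    return -(x - 0.3) ** 2

def rbf(a, b, ls=0.15):                    # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp(Xtr, ytr, Xte, noise=1e-6):         # zero-mean GP posterior on a grid
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr)
    mu = Ks @ np.linalg.solve(K, ytr)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu, np.maximum(var, 1e-12)

norm_pdf = lambda z: np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
norm_cdf = lambda z: 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))

X = np.array([0.1, 0.9])                   # initial design
y = f(X)
grid = np.linspace(0, 1, 201)
for _ in range(8):
    mu, var = gp(X, y, grid)
    sd = np.sqrt(var)
    z = (mu - y.max()) / sd
    ei = (mu - y.max()) * norm_cdf(z) + sd * norm_pdf(z)   # closed-form EI
    xn = grid[np.argmax(ei)]               # next sample point
    X = np.append(X, xn)
    y = np.append(y, f(xn))

x_best = X[np.argmax(y)]                   # returned maximizer estimate
```

The paper's contribution is to quantify the randomness in outputs like `x_best`: because the GP model makes the returned maximizer a random quantity, it constructs confidence regions for the true maximum point and value, with coverage guaranteed by new uniform error bounds for sequential GP regression.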