We consider hyper-differential sensitivity analysis (HDSA) of nonlinear Bayesian inverse problems governed by partial differential equations (PDEs) with infinite-dimensional parameters. In previous works, HDSA has been used to assess the sensitivity of the solution of deterministic inverse problems to additional model uncertainties and to different types of measurement data. In the present work, we extend HDSA to the class of Bayesian inverse problems governed by PDEs. The focus is on assessing the sensitivity of certain key quantities derived from the posterior distribution. Specifically, we analyze the sensitivity of the maximum a posteriori (MAP) point and the Bayes risk, making full use of the information embedded in the Bayesian inverse problem. After establishing our mathematical framework for HDSA of Bayesian inverse problems, we present a detailed computational approach for computing the proposed HDSA indices. We examine the effectiveness of the proposed approach on an inverse problem governed by a PDE modeling heat conduction.
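As a point of reference for the kind of quantity such indices measure, the display below sketches the standard post-optimality (implicit function theorem) argument for MAP-point sensitivity in a finite-dimensional setting. The symbols J, m, theta, H, and B are illustrative notation, not the paper's, and the Bayes-risk sensitivities additionally involve expectations over data and parameters, which this sketch omits.

```latex
% Hedged finite-dimensional sketch of MAP-point sensitivity; notation is illustrative.
% m = inversion parameter, \theta = auxiliary parameters, J = negative log-posterior.
\[
  m_{\mathrm{MAP}}(\theta) = \arg\min_{m} J(m,\theta)
  \quad\Longrightarrow\quad
  \nabla_m J\bigl(m_{\mathrm{MAP}}(\theta),\theta\bigr) = 0 .
\]
% Differentiating the optimality condition with respect to \theta gives the sensitivity operator
\[
  \frac{d\, m_{\mathrm{MAP}}}{d\theta}
  = -\bigl(\nabla^2_{mm} J\bigr)^{-1}\,\nabla^2_{m\theta} J
  = -H^{-1} B ,
\]
% and suitably scaled norms of the columns of -H^{-1}B serve as sensitivity indices.
```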
Hyper-differential sensitivity analysis for inverse problems constrained by partial differential equations
Abstract: High-fidelity models used in many science and engineering applications couple multiple physical states and parameters. Inverse problems arise when a model parameter cannot be determined directly, but rather must be estimated using (typically sparse and noisy) measurements of the states. The data are usually not sufficient to simultaneously inform all of the parameters. Consequently, the governing model typically contains parameters which are uncertain but must be specified for the complete model characterization necessary to invert for the parameters of interest. We refer to the combination of the additional model parameters (those which are not inverted for) and the measured data states as the ‘complementary parameters’. We seek to quantify the relative importance of these complementary parameters to the solution of the inverse problem. To address this, we present a framework based on hyper-differential sensitivity analysis (HDSA). HDSA computes the derivative of the solution of an inverse problem with respect to complementary parameters. We present a mathematical framework for HDSA in large-scale PDE-constrained inverse problems and show how HDSA can be interpreted to give insight about the inverse problem. We demonstrate the effectiveness of the method on an inverse problem in porous medium flow: a permeability field is estimated from pressure and concentration measurements, with uncertainty in the boundary conditions, source injection, and diffusion coefficient.
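To make the object HDSA computes concrete, here is a small, self-contained Python sketch on a discretized linear-quadratic toy problem. The forward operator A(theta), the Tikhonov weight alpha, and the choice of index (a norm of each sensitivity direction) are assumptions for illustration only; they do not reproduce the paper's infinite-dimensional, PDE-constrained formulation.

```python
# Hedged sketch of HDSA on a small, discretized linear inverse problem.
# All names (A, theta, alpha, ...) are illustrative, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
n, k, p = 20, 15, 3                 # inversion dim, data dim, number of complementary params
A0 = rng.standard_normal((k, n))    # nominal parameter-to-observable map
dA = [rng.standard_normal((k, n)) for _ in range(p)]   # how each theta_i perturbs it
alpha = 1e-1                        # regularization weight (stand-in for the prior)

def A(theta):
    """Toy linear forward operator depending on complementary parameters theta."""
    return A0 + sum(t * dAi for t, dAi in zip(theta, dA))

def grad_m(theta, m, d):
    """Gradient in m of J(m, theta) = 0.5||A(theta)m - d||^2 + 0.5*alpha*||m||^2."""
    At = A(theta)
    return At.T @ (At @ m - d) + alpha * m

# synthetic data at the nominal complementary parameters
theta0 = np.zeros(p)
m_true = rng.standard_normal(n)
d = A(theta0) @ m_true + 0.01 * rng.standard_normal(k)

# solve the inverse problem at theta0
At = A(theta0)
H = At.T @ At + alpha * np.eye(n)   # Hessian of J in m
m_star = np.linalg.solve(H, At.T @ d)

# HDSA: d m*/d theta_i = -H^{-1} B_i with B_i = d/d theta_i [grad_m J](m*, theta0),
# approximated here by central finite differences in theta.
eps = 1e-6
indices = np.zeros(p)
for i in range(p):
    e = np.zeros(p); e[i] = eps
    B_i = (grad_m(theta0 + e, m_star, d) - grad_m(theta0 - e, m_star, d)) / (2 * eps)
    indices[i] = np.linalg.norm(np.linalg.solve(H, B_i))   # one simple choice of index

print("relative HDSA indices:", np.round(indices / indices.max(), 3))
```

In the large-scale PDE-constrained setting the same quantities are typically formed matrix-free, using adjoint-based derivatives and iterative solvers in place of the dense factorizations above.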
- Award ID(s): 1745654
- PAR ID: 10420395
- Date Published:
- Journal Name: Inverse Problems
- Volume: 36
- Issue: 12
- ISSN: 0266-5611
- Page Range / eLocation ID: 125001
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Bayesian inference provides a systematic framework for integration of data with mathematical models to quantify the uncertainty in the solution of the inverse problem. However, the solution of Bayesian inverse problems governed by complex forward models described by partial differential equations (PDEs) remains prohibitive with black-box Markov chain Monte Carlo (MCMC) methods. We present hIPPYlib-MUQ, an extensible and scalable software framework that contains implementations of state-of-the-art algorithms aimed to overcome the challenges of high-dimensional, PDE-constrained Bayesian inverse problems. These algorithms accelerate MCMC sampling by exploiting the geometry and intrinsic low-dimensionality of parameter space via derivative information and low-rank approximation. The software integrates two complementary open-source software packages, hIPPYlib and MUQ. hIPPYlib solves PDE-constrained inverse problems using automatically generated adjoint-based derivatives, but it lacks full Bayesian capabilities. MUQ provides a spectrum of powerful Bayesian inversion models and algorithms, but expects forward models to come equipped with gradients and Hessians to permit large-scale solution. By combining these two complementary libraries, we created a robust, scalable, and efficient software framework that realizes the benefits of each and allows us to tackle complex large-scale Bayesian inverse problems across a broad spectrum of scientific and engineering disciplines. To illustrate the capabilities of hIPPYlib-MUQ, we present a comparison of a number of MCMC methods available in the integrated software on several high-dimensional Bayesian inverse problems. These include problems characterized by both linear and nonlinear PDEs, various noise models, and different parameter dimensions. The results demonstrate that large (∼ 50×) speedups over conventional black-box and gradient-based MCMC algorithms can be obtained by exploiting Hessian information (from the log-posterior), underscoring the power of the integrated hIPPYlib-MUQ framework.
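To give a sense of the linear algebra behind those speedups, the sketch below forms a low-rank Laplace (Gaussian) approximation of a posterior covariance from a toy Gauss-Newton Hessian. It is written in plain numpy; it is not the hIPPYlib or MUQ API, and every name in it is an illustrative assumption.

```python
# Hedged numpy sketch of the Hessian-informed, low-rank structure such methods exploit;
# not library code, just the underlying linear algebra on a toy problem.
import numpy as np

rng = np.random.default_rng(1)
n = 50                                   # parameter dimension (toy)
L = rng.standard_normal((n, n)) / np.sqrt(n)
Gamma_prior = L @ L.T + np.eye(n)        # prior covariance (SPD by construction)
J = rng.standard_normal((8, n))          # toy parameter-to-observable Jacobian at the MAP
noise_prec = 25.0                        # inverse noise variance
H_misfit = noise_prec * J.T @ J          # Gauss-Newton data-misfit Hessian

# Low-rank approximation of the prior-preconditioned misfit Hessian:
# eigenpairs with large eigenvalues identify the data-informed subspace.
Lp = np.linalg.cholesky(Gamma_prior)
Hpp = Lp.T @ H_misfit @ Lp
lam, V = np.linalg.eigh(Hpp)
keep = lam > 1e-2                        # keep directions the data actually inform (threshold illustrative)
lam, V = lam[keep], V[:, keep]

# Laplace-approximation posterior covariance via the low-rank update
#   Gamma_post ≈ Lp (I - V diag(lam/(1+lam)) V^T) Lp^T
D = np.diag(lam / (1.0 + lam))
Gamma_post = Lp @ (np.eye(n) - V @ D @ V.T) @ Lp.T

# A Gaussian with this covariance, centered at the MAP point, is the kind of object
# used to propose or precondition MCMC moves so sampling effort concentrates on the
# data-informed directions rather than the full parameter space.
print("retained rank:", lam.size, "of", n)
```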
In this paper, we consider iterative methods based on sampling for computing solutions to separable nonlinear inverse problems where the entire dataset cannot be accessed or is not available all at once. In such scenarios (e.g., when massive amounts of data exceed memory capabilities or when data is being streamed), solving inverse problems, especially nonlinear ones, can be very challenging. We focus on separable nonlinear problems, where the objective function is nonlinear in one (typically small) set of parameters and linear in another (larger) set of parameters. For the linear problem, we describe a limited-memory sampled Tikhonov method, and for the nonlinear problem, we describe an approach to integrate the limited-memory sampled Tikhonov method within a nonlinear optimization framework. The proposed method is computationally efficient in that it only uses available data at any iteration to update both sets of parameters. Numerical experiments applied to massive super-resolution image reconstruction problems show the power of these methods.
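The following sketch mimics the alternating, one-block-at-a-time structure described above on a scalar exponential-fitting problem: a regularized linear solve for the linear parameter and a gradient step for the nonlinear one, each using only the currently available sample. The model, step size, and update rules are illustrative assumptions and do not reproduce the paper's limited-memory sampled Tikhonov algorithm.

```python
# Hedged sketch of a sampled, alternating update for a separable nonlinear problem;
# the exponential model and the updates below are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 1000)                  # "measurement locations"
a_true, y_true = 2.0, 0.8                         # linear (a) and nonlinear (y) parameters
d = a_true * np.exp(-y_true * t) + 0.01 * rng.standard_normal(t.size)

lam, step = 1e-3, 0.1
a, y = 0.0, 0.3                                   # initial guesses

for it in range(500):
    block = rng.choice(t.size, size=50, replace=False)   # only this sample is "available"
    tb, db = t[block], d[block]
    e = np.exp(-y * tb)
    # Tikhonov-regularized linear solve for a using only the current block
    a = (e @ db) / (e @ e + lam)
    # gradient step on the nonlinear parameter y using the same block
    r = a * e - db
    g = np.mean(2.0 * r * a * (-tb) * e)
    y -= step * g

print(f"estimated (a, y) = ({a:.3f}, {y:.3f}), true = ({a_true}, {y_true})")
```

The point of the structure is that neither update ever touches the full dataset, which is what makes such schemes usable when data are streamed or too large for memory.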
We introduce a Monte Carlo method for computing derivatives of the solution to a partial differential equation (PDE) with respect to problem parameters (such as domain geometry or boundary conditions). Derivatives can be evaluated at arbitrary points, without performing a global solve or constructing a volumetric grid or mesh. The method is hence well suited to inverse problems with complex geometry, such as PDE-constrained shape optimization. Like other walk on spheres (WoS) algorithms, our method is trivial to parallelize, and is agnostic to boundary representation (meshes, splines, implicit surfaces, etc.), supporting large topological changes. We focus in particular on screened Poisson equations, which model diverse problems from scientific and geometric computing. As in differentiable rendering, we jointly estimate derivatives with respect to all parameters; hence, cost does not grow significantly with parameter count. In practice, even noisy derivative estimates exhibit fast, stable convergence for stochastic gradient-based optimization, as we show through examples from thermal design, shape from diffusion, and computer graphics.
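For context, the sketch below implements the basic (primal) walk-on-spheres estimator for Laplace's equation on the unit disk. The derivative estimators described above build on this family of methods, but nothing in the code is taken from the paper; the boundary data g is chosen so the exact solution is known, and all names are illustrative.

```python
# Hedged sketch of a plain walk-on-spheres (WoS) estimator for a Laplace problem on the
# unit disk; the primal building block behind such grid-free Monte Carlo PDE solvers.
import numpy as np

rng = np.random.default_rng(4)

def g(p):
    """Dirichlet boundary data; u(x, y) = x is harmonic, so it is also the exact solution."""
    return p[0]

def wos_estimate(p0, n_walks=5000, eps=1e-4):
    """Estimate u(p0) for Laplace's equation on the unit disk with boundary data g."""
    total = 0.0
    for _ in range(n_walks):
        p = np.array(p0, dtype=float)
        while True:
            dist = 1.0 - np.linalg.norm(p)           # distance to the circle |x| = 1
            if dist < eps:
                total += g(p / np.linalg.norm(p))     # evaluate g at the nearest boundary point
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)     # jump uniformly on the largest empty sphere
            p = p + dist * np.array([np.cos(theta), np.sin(theta)])
    return total / n_walks

p0 = (0.3, 0.2)
print("WoS estimate:", round(wos_estimate(p0), 3), " exact:", p0[0])
```

Each walk only queries the distance to the boundary and the boundary data, which is why estimators in this family need no global solve, volumetric grid, or mesh.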