Uncertainty quantification for linear inverse problems remains a challenging task, especially for problems with a very large number of unknown parameters (e.g., dynamic inverse problems) and for problems where computation of the square root and inverse of the prior covariance matrix are not feasible. This work exploits Krylov subspace methods to develop and analyze new techniques for large-scale uncertainty quantification in inverse problems. In this work, we assume that generalized Golub-Kahan-based methods have been used to compute an estimate of the solution, and we describe efficient methods to explore the posterior distribution. In particular, we use the generalized Golub-Kahan bidiagonalization to derive an approximation of the posterior covariance matrix, and we provide theoretical results that quantify the accuracy of the approximate posterior covariance matrix and of the resulting posterior distribution. Then, we describe efficient methods that use the approximation to compute measures of uncertainty, including the Kullback-Leibler divergence. We present two methods that use the preconditioned Lanczos algorithm to efficiently generate samples from the posterior distribution. Numerical examples from dynamic photoacoustic tomography demonstrate the effectiveness of the described approaches.
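To make the low-rank structure referred to above concrete, here is a minimal matrix-free Python sketch for a Bayesian linear inverse problem d = A x + e with Gaussian noise N(0, sigma2 I) and Gaussian prior N(0, Q). It uses SciPy's Lanczos-based `eigsh` on the prior-preconditioned data-misfit Hessian and assumes a prior factor `Qsqrt` with Q = Qsqrt Qsqrt^T is available, which is precisely the assumption the paper's generalized Golub-Kahan approach is designed to avoid; all names and parameter values are illustrative, not the paper's implementation. The last lines show how the same low-rank factors yield approximate posterior variances and a zero-mean posterior draw.

```python
# Minimal sketch (not the paper's generalized Golub-Kahan routine): low-rank
# approximation of the posterior covariance via Lanczos on the
# prior-preconditioned data-misfit Hessian  H = Qsqrt^T A^T A Qsqrt / sigma2.
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def lowrank_posterior(A, Qsqrt, sigma2, k):
    """Top-k eigenpairs (lam, V) of H, giving
    Gamma_post ~= Q - (Qsqrt V) diag(lam/(1+lam)) (Qsqrt V)^T."""
    n = A.shape[1]
    H = LinearOperator((n, n),
                       matvec=lambda v: Qsqrt.T @ (A.T @ (A @ (Qsqrt @ v))) / sigma2)
    lam, V = eigsh(H, k=k, which='LM')   # Lanczos: dominant eigenpairs
    return lam, V

# Toy usage: mildly ill-posed operator and an exponential-kernel prior covariance
rng = np.random.default_rng(0)
m, n, k = 150, 200, 20
A = rng.standard_normal((m, n)) @ np.diag(1.0 / np.arange(1, n + 1))
idx = np.arange(n)
Q = np.exp(-np.abs(np.subtract.outer(idx, idx)) / 20.0) + 1e-8 * np.eye(n)
Qsqrt = np.linalg.cholesky(Q)
lam, V = lowrank_posterior(A, Qsqrt, sigma2=1e-2, k=k)

# Approximate posterior variances (diagonal of Gamma_post) from the low-rank factors
B = Qsqrt @ V
post_var = np.diag(Q) - (B ** 2) @ (lam / (1.0 + lam))

# Zero-mean posterior draw: S = Qsqrt (I - V diag(1 - 1/sqrt(1+lam)) V^T)
# satisfies S S^T = Q - (Qsqrt V) diag(lam/(1+lam)) (Qsqrt V)^T; add the
# posterior mean estimate for a full sample.
w = rng.standard_normal(n)
sample = Qsqrt @ (w - V @ ((1.0 - 1.0 / np.sqrt(1.0 + lam)) * (V.T @ w)))
```

The filter factors lam/(1+lam) indicate why a modest number of Lanczos vectors can capture most of the variance reduction when the eigenvalues of the data-misfit Hessian decay quickly.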
In this work, we describe a new approach that uses variational encoder-decoder (VED) networks for efficient uncertainty quantification for
- NSF-PAR ID: 10515413
- Publisher / Repository: IOP Publishing
- Date Published:
- Journal Name: Inverse Problems
- Volume: 40
- Issue: 7
- ISSN: 0266-5611
- Format(s): Medium: X
- Size(s): Article No. 075010
- Sponsoring Org: National Science Foundation
More Like this
- Summary (Mariño, Inés P., Ed.): In many physiological systems, real-time endogenous and exogenous signals in living organisms provide critical information and interpretations of physiological functions; however, these signals or variables of interest are not directly accessible and must be estimated from noisy, measured signals. In this paper, we study an inverse problem of recovering gas exchange signals of animals placed in a flow-through respirometry chamber from measured gas concentrations. For large-scale experiments (e.g., long scans with a high sampling rate) that have many uncertainties (e.g., noise in the observations or an unknown impulse response function), this is a computationally challenging inverse problem. We first describe various computational tools that can be used for respirometry reconstruction and uncertainty quantification when the impulse response function is known. Then, we address the more challenging problem where the impulse response function is not known or only partially known. We describe nonlinear optimization methods for reconstruction, where both the unknown model parameters and the unknown signal are reconstructed simultaneously. Numerical experiments show the benefits and potential impacts of these methods in respirometry.
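As an illustration of the known-impulse-response case mentioned in the summary above, the following Python sketch recovers a gas-exchange signal from simulated chamber concentrations by Tikhonov-regularized deconvolution with a first-derivative penalty. The exponential impulse response, noise level, and regularization parameter are assumptions chosen for the toy example, not values from the paper.

```python
# Minimal sketch: measured concentration y = (h * u) + noise, with h the chamber
# impulse response; recover u by Tikhonov-regularized deconvolution.
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(h, n):
    """Lower-triangular (causal) convolution matrix for impulse response h."""
    first_col = np.r_[h, np.zeros(n - len(h))]
    first_row = np.zeros(n)
    first_row[0] = h[0]
    return toeplitz(first_col, first_row)

def tikhonov_deconvolve(y, h, alpha):
    """Solve min_u ||H u - y||^2 + alpha ||D u||^2, D = first-difference operator."""
    n = len(y)
    H = conv_matrix(h, n)
    D = np.diff(np.eye(n), axis=0)
    return np.linalg.solve(H.T @ H + alpha * (D.T @ D), H.T @ y)

# Toy usage: a square-pulse "gas exchange" signal blurred by exponential washout
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 400)
h = np.exp(-t[:80] / 1.5)
h /= h.sum()                                     # assumed chamber impulse response
u_true = ((t > 3.0) & (t < 6.0)).astype(float)   # square-pulse signal
y = conv_matrix(h, len(t)) @ u_true + 0.01 * rng.standard_normal(len(t))
u_rec = tikhonov_deconvolve(y, h, alpha=0.1)
```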
- Abstract: We introduce the concept of decision-focused surrogate modeling for solving computationally challenging nonlinear optimization problems in real-time settings. The proposed data-driven framework seeks to learn a simpler, for example convex, surrogate optimization model that is trained to minimize the decision prediction error, which is defined as the difference between the optimal solutions of the original and the surrogate optimization models. The learning problem, formulated as a bilevel program, can be viewed as a data-driven inverse optimization problem to which we apply a decomposition-based solution algorithm from previous work. We validate our framework through numerical experiments involving the optimization of common nonlinear chemical processes such as chemical reactors, heat exchanger networks, and material blending systems. We also present a detailed comparison of decision-focused surrogate modeling with standard data-driven surrogate modeling methods and demonstrate that our approach is significantly more data-efficient while producing simple surrogate models with high decision prediction accuracy.
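A toy Python sketch of the decision-focused idea follows: a convex quadratic surrogate is fit by directly minimizing the decision prediction error against optimal solutions of a small nonconvex "true" model. The surrogate parameterization, the toy process model, and the derivative-free fit are illustrative choices, not the bilevel decomposition algorithm of the paper.

```python
# Minimal sketch of decision-focused surrogate fitting on a toy problem.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
nx, npar, eps = 2, 2, 1e-3

def true_optimum(p):
    # "True" nonconvex process model: min_x (x.x)^2 - p.x, solved numerically.
    obj = lambda x: np.dot(x, x) ** 2 - p @ x
    return minimize(obj, np.zeros(nx), method="BFGS").x

# Training scenarios: problem parameters p and their true optimal decisions.
P_train = rng.uniform(0.5, 2.0, size=(20, npar))
X_train = np.array([true_optimum(p) for p in P_train])

def surrogate_minimizer(theta, p):
    # Convex quadratic surrogate min_x 0.5 x^T Q x - (W p)^T x with Q = L L^T + eps I.
    L = theta[: nx * nx].reshape(nx, nx)
    W = theta[nx * nx:].reshape(nx, npar)
    Q = L @ L.T + eps * np.eye(nx)
    return np.linalg.solve(Q, W @ p)        # closed-form argmin of the surrogate

def decision_prediction_error(theta):
    preds = np.array([surrogate_minimizer(theta, p) for p in P_train])
    return np.mean(np.sum((preds - X_train) ** 2, axis=1))

theta0 = rng.standard_normal(nx * nx + nx * npar)
fit = minimize(decision_prediction_error, theta0, method="Nelder-Mead",
               options={"maxiter": 5000, "fatol": 1e-10, "xatol": 1e-8})
x_pred = surrogate_minimizer(fit.x, np.array([1.0, 1.5]))   # fast "real-time" decision
```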
- Abstract: The inverse problem for radiative transfer is important in many applications, such as optical tomography and remote sensing. Major challenges include large memory requirements and computational expense, which arise from high-dimensionality and the need for iterations in solving the inverse problem. Here, to alleviate these issues, we propose adaptive-mesh inversion: a goal-oriented hp-adaptive mesh refinement method for solving inverse radiative transfer problems. One novel aspect here is that the two optimizations (one for inversion, and one for mesh adaptivity) are treated simultaneously and blended together. By exploiting the connection between duality-based mesh adaptivity and adjoint-based inversion techniques, we propose a goal-oriented error estimator, which is cheap to compute and can efficiently guide the mesh refinement to numerically solve the inverse problem. We use discontinuous Galerkin spectral element methods to discretize the forward and the adjoint problems. Then, based on the goal-oriented error estimator, we propose an hp-adaptive algorithm to refine the meshes. Numerical experiments are presented at the end and show convergence speed-up and reduced memory occupation by the goal-oriented mesh adaptive method.
- In this paper, we consider iterative methods based on sampling for computing solutions to separable nonlinear inverse problems where the entire dataset cannot be accessed or is not available all-at-once. In such scenarios (e.g., when massive amounts of data exceed memory capabilities or when data is being streamed), solving inverse problems, especially nonlinear ones, can be very challenging. We focus on separable nonlinear problems, where the objective function is nonlinear in one (typically small) set of parameters and linear in another (larger) set of parameters. For the linear problem, we describe a limited-memory sampled Tikhonov method, and for the nonlinear problem, we describe an approach to integrate the limited-memory sampled Tikhonov method within a nonlinear optimization framework. The proposed method is computationally efficient in that it only uses available data at any iteration to update both sets of parameters. Numerical experiments applied to massive super-resolution image reconstruction problems show the power of these methods.
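The following Python sketch illustrates the separable structure and the block-by-block use of data described above: a single nonlinear parameter (a blur width) is handled by an outer one-dimensional optimization, while the many linear parameters are updated by a block-iterative Tikhonov-type solver that touches one block of rows at a time. This is an illustration of the problem structure only, not the paper's limited-memory sampled Tikhonov algorithm; all model and parameter choices below are assumed.

```python
# Minimal sketch: separable model b ~ A(y) x with one nonlinear parameter y and
# many linear parameters x, with the linear subproblem solved one data block at a time.
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 200                                        # number of linear parameters

def forward_matrix(y):
    """Gaussian blurring matrix with width y (the nonlinear parameter)."""
    col = np.exp(-0.5 * (np.arange(n) / y) ** 2)
    A = toeplitz(col)
    return A / A.sum(axis=1, keepdims=True)

# Synthetic data from an assumed "true" width and a piecewise-constant signal
x_true = np.zeros(n)
x_true[60:90] = 1.0
x_true[120:160] = 0.5
b = forward_matrix(8.0) @ x_true + 0.01 * rng.standard_normal(n)

def block_tikhonov_solve(A, b, lam=1e-2, block=40, sweeps=10):
    """Block-iterative Tikhonov-type updates using one block of rows at a time."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for start in range(0, A.shape[0], block):
            Ak, bk = A[start:start + block], b[start:start + block]
            # Proximal update: argmin_z ||Ak z - bk||^2 + lam ||z - x||^2
            x = x + np.linalg.solve(Ak.T @ Ak + lam * np.eye(A.shape[1]),
                                    Ak.T @ (bk - Ak @ x))
    return x

def projected_residual(y):
    # Variable-projection flavor: eliminate x for each candidate y, score the fit.
    A = forward_matrix(y)
    return np.linalg.norm(A @ block_tikhonov_solve(A, b) - b)

y_est = minimize_scalar(projected_residual, bounds=(2.0, 20.0), method="bounded").x
x_est = block_tikhonov_solve(forward_matrix(y_est), b)
```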