

This content will become publicly available on April 2, 2026

Title: Unique reconstruction for discretized inverse problems: a random sketching approach via subsampling
Abstract Computational inverse problems use a finite number of measurements to infer a discrete approximation of the unknown parameter function. Motivated by the setting of PDE-based optimization, we study the unique reconstruction of discretized inverse problems by examining the positivity of the Hessian matrix. What is the reconstruction power of a fixed number of data observations? How many parameters can one reconstruct? Here we describe a probabilistic approach and spell out the interplay between the observation size (r) and the number of parameters to be uniquely identified (m). The technical pillar is a random sketching strategy that relies heavily on matrix concentration inequalities and sampling theory. By analyzing a randomly subsampled Hessian matrix, we obtain a well-conditioned reconstruction problem with high probability. Our main theory is validated by numerical experiments, using an elliptic inverse problem as an example.
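To make the subsampling idea concrete, the following is a minimal numerical sketch (illustrative only, not the authors' code): draw r of the n available observations uniformly at random, form the rescaled subsampled Gauss-Newton Hessian, and check that its smallest eigenvalue stays bounded away from zero, i.e. that the reduced reconstruction problem is well conditioned. The synthetic sensitivity matrix G and the sizes n, m, r below are assumptions made for illustration.

```python
# Minimal sketch of Hessian subsampling (synthetic data, illustrative sizes).
import numpy as np

rng = np.random.default_rng(0)

n, m, r = 2000, 50, 200          # total observations, parameters, subsample size
G = rng.standard_normal((n, m))  # stand-in for a discretized sensitivity (Jacobian) matrix

idx = rng.choice(n, size=r, replace=False)  # uniform subsampling of observations
H_sub = (n / r) * G[idx].T @ G[idx]         # rescaled subsampled Gauss-Newton Hessian

lam_min = np.linalg.eigvalsh(H_sub).min()
print(f"smallest eigenvalue of subsampled Hessian: {lam_min:.3f}")
print("well conditioned" if lam_min > 1e-8 else "possibly rank deficient: increase r")
```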
Award ID(s): 2308440, 2324368
PAR ID: 10612706
Author(s) / Creator(s): ; ; ;
Publisher / Repository: IOP Publishing
Date Published:
Journal Name: Inverse Problems
Volume: 41
Issue: 4
ISSN: 0266-5611
Page Range / eLocation ID: 045010
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract Obtaining lightweight and accurate approximations of discretized objective functional Hessians in inverse problems governed by partial differential equations (PDEs) is essential for making both deterministic and Bayesian large-scale inverse problems computationally tractable. The cubic computational complexity of dense linear-algebra tasks, such as the Cholesky factorization used to sample Gaussian distributions and solve Newton linear systems, is a bottleneck at large scale. These tasks can be reduced to log-linear complexity by using hierarchical off-diagonal low-rank (HODLR) matrix approximations. In this work, we show that a class of Hessians arising from inverse problems governed by PDEs is well approximated by the HODLR matrix format. In particular, we study inverse problems governed by PDEs that model the instantaneous viscous flow of ice sheets. In these problems, we seek a spatially distributed basal sliding parameter field such that the flow predicted by the ice sheet model is consistent with ice sheet surface velocity observations. We demonstrate the use of the HODLR Hessian approximation to efficiently sample the Laplace approximation of the posterior distribution, with the covariance further approximated by HODLR matrix compression. Computational studies illustrate ice sheet problem regimes for which the Gauss–Newton data-misfit Hessian is more efficiently approximated by the HODLR matrix format than by the low-rank (LR) format. We then demonstrate that HODLR approximations can be favorable, compared to global LR approximations, for large-scale problems by studying the data-misfit Hessian associated with inverse problems governed by the first-order Stokes flow model on the Humboldt glacier and the Greenland ice sheet.
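The HODLR format referred to in this abstract can be illustrated with a rough sketch (assumptions: a dense symmetric matrix H and a hand-picked off-diagonal rank k, neither of which is specified in the abstract): keep the diagonal blocks and recurse on them, and replace each off-diagonal block by a rank-k truncated SVD.

```python
# Rough one-file illustration of HODLR-style compression for a symmetric matrix.
import numpy as np

def hodlr_compress(H, k=8, min_size=64):
    """Return a nested, HODLR-like representation of the symmetric matrix H."""
    n = H.shape[0]
    if n <= min_size:
        return H                      # small diagonal block kept dense
    h = n // 2
    A11, A22, A12 = H[:h, :h], H[h:, h:], H[:h, h:]
    U, s, Vt = np.linalg.svd(A12, full_matrices=False)
    U_k = U[:, :k] * s[:k]            # rank-k factors of the off-diagonal block
    V_k = Vt[:k, :]                   # (the lower block follows by symmetry)
    return {"diag": (hodlr_compress(A11, k, min_size),
                     hodlr_compress(A22, k, min_size)),
            "offdiag": (U_k, V_k)}

# Toy usage: a smooth kernel matrix whose off-diagonal blocks are numerically low rank.
x = np.linspace(0.0, 1.0, 256)
H = np.exp(-np.abs(x[:, None] - x[None, :]))   # stand-in for a data-misfit Hessian
tree = hodlr_compress(H, k=8)
```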
  2. We present an extensible software framework, hIPPYlib, for the solution of large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields (which are high-dimensional after discretization). hIPPYlib overcomes the prohibitively expensive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators, notably the Hessian of the log-posterior. The key property of the algorithms implemented in hIPPYlib is that the solution of the inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior with an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log-posterior evaluated at the MAP point. The construction of the posterior covariance is made tractable by invoking a low-rank approximation of the Hessian of the log-likelihood. Scalable tools for sample generation are also discussed. hIPPYlib makes all of these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms.
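The low-rank Laplace approximation described above can be sketched generically in NumPy. This is not hIPPYlib's API; the Jacobian stand-in J, the rank k, and the identity prior covariance are simplifying assumptions. With prior covariance I, a rank-k eigendecomposition of the data-misfit Hessian gives C_post ≈ I - V diag(λ/(1+λ)) V^T.

```python
# Generic low-rank Laplace posterior covariance sketch (not hIPPYlib code).
import numpy as np

rng = np.random.default_rng(1)
m, k = 500, 20

J = rng.standard_normal((40, m))           # stand-in for a parameter-to-observable Jacobian
H_misfit = J.T @ J                         # Gauss-Newton Hessian of the negative log-likelihood

lam, V = np.linalg.eigh(H_misfit)
lam, V = lam[::-1][:k], V[:, ::-1][:, :k]  # keep the k dominant eigenpairs

D = lam / (1.0 + lam)                      # filter factors from the Sherman-Morrison-Woodbury identity

def apply_posterior_cov(x):
    """Apply C_post ~= I - V diag(D) V^T to a vector x."""
    return x - V @ (D * (V.T @ x))

print(apply_posterior_cov(rng.standard_normal(m))[:5])
```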
  3. Abstract In linear regression, the coefficients are simple to estimate with the least-squares method when the design matrix for the observed measurements is known. However, real-world applications may encounter complications such as an unknown design matrix and complex-valued parameters. The design matrix can be estimated from prior information, but multiplying by its transpose generally yields an ill-conditioned matrix and hence an ill-posed inverse problem. This can be combated by adding regularizers to the model, but regularization does not always mitigate the issue. Here, we propose a Bayesian approach to a complex-valued latent variable linear model with an application to functional magnetic resonance imaging (fMRI) image reconstruction. The complex-valued linear model and our Bayesian model are evaluated through extensive simulations and applied to experimental fMRI data.
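As a toy illustration of the ill-conditioning and regularization point, below is a small complex-valued ridge estimator. This is a simplification of the Bayesian latent-variable model in the abstract; the design matrix X, the noise level, and the regularization weight alpha are arbitrary illustrative choices.

```python
# Complex-valued ridge regression as a stand-in for the regularized linear model.
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 20
X = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))
beta_true = rng.standard_normal(p) + 1j * rng.standard_normal(p)
y = X @ beta_true + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

alpha = 1e-2
# Regularized normal equations: (X^H X + alpha I) beta = X^H y.
beta_hat = np.linalg.solve(X.conj().T @ X + alpha * np.eye(p), X.conj().T @ y)
print(np.linalg.norm(beta_hat - beta_true) / np.linalg.norm(beta_true))
```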
  4. Abstract Many inverse problems are naturally formulated as PDE-constrained optimization problems. These non-linear, large-scale, constrained optimization problems pose many challenges, of which the inherent non-linearity of the problem is an important one. In this paper, we focus on a relaxed formulation of the PDE-constrained optimization problem and analyze its properties, including convexity under certain assumptions. Starting from an infinite-dimensional formulation of the inverse problem with discrete data, we propose a general framework for the analysis and discretisation of such problems. The relaxed formulation of the PDE-constrained optimization problem is shown to reduce to a weighted non-linear least-squares problem. The weight matrix turns out to be the Gram matrix of solutions of the PDE, and in some cases it can be estimated directly from the measurements. The latter observation points to a potential way to unify recently proposed data-driven reduced-order models for inverse problems with PDE-constrained optimization. We provide a number of representative case studies and numerical examples to illustrate our findings.
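The weighted least-squares structure mentioned above can be illustrated with a toy example, under the simplifying (and purely illustrative) assumptions that the forward map F is linear and that the weight W is the Gram matrix of synthetic solution snapshots; the objective is then ||F m - d||_W^2 = (F m - d)^T W (F m - d).

```python
# Toy weighted least-squares problem with a Gram-matrix weight (all data synthetic).
import numpy as np

rng = np.random.default_rng(3)
n_obs, n_param, n_snap = 30, 10, 40

U = rng.standard_normal((n_snap, n_obs))   # stand-in for sampled PDE solutions
W = U.T @ U / n_snap                       # Gram matrix used as the weight
F = rng.standard_normal((n_obs, n_param))  # stand-in linear forward operator
m_true = rng.standard_normal(n_param)
d = F @ m_true + 0.01 * rng.standard_normal(n_obs)

# Weighted normal equations: (F^T W F) m = F^T W d.
m_hat = np.linalg.solve(F.T @ W @ F, F.T @ W @ d)
print(np.linalg.norm(m_hat - m_true))
```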
  5. Summary Uncertainty quantification for linear inverse problems remains a challenging task, especially for problems with a very large number of unknown parameters (e.g., dynamic inverse problems) and for problems where computation of the square root and inverse of the prior covariance matrix is not feasible. This work exploits Krylov subspace methods to develop and analyze new techniques for large-scale uncertainty quantification in inverse problems. We assume that generalized Golub-Kahan-based methods have been used to compute an estimate of the solution, and we describe efficient methods to explore the posterior distribution. In particular, we use the generalized Golub-Kahan bidiagonalization to derive an approximation of the posterior covariance matrix, and we provide theoretical results that quantify the accuracy of the approximate posterior covariance matrix and of the resulting posterior distribution. Then, we describe efficient methods that use the approximation to compute measures of uncertainty, including the Kullback-Leibler divergence. We present two methods that use the preconditioned Lanczos algorithm to efficiently generate samples from the posterior distribution. Numerical examples from dynamic photoacoustic tomography demonstrate the effectiveness of the described approaches.
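A compact sketch of the low-rank posterior-covariance idea follows, using SciPy's Lanczos-based partial SVD in place of the generalized Golub-Kahan machinery of the paper; the forward operator A, the noise level sigma, the identity prior, and the rank k are illustrative assumptions.

```python
# Low-rank approximation of (I + A^T A / sigma^2)^{-1} and approximate sampling.
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(4)
n_obs, n_param, k, sigma = 200, 1000, 25, 0.05

A = rng.standard_normal((n_obs, n_param)) / np.sqrt(n_obs)  # forward operator stand-in

U, s, Vt = svds(A, k=k)                 # Lanczos bidiagonalization based partial SVD
lam = (s ** 2) / sigma ** 2             # eigenvalues of the scaled data-misfit Hessian
D = lam / (1.0 + lam)

def sample_posterior():
    """Draw an approximate zero-mean sample from N(0, (I + A^T A / sigma^2)^{-1})."""
    z = rng.standard_normal(n_param)
    # Square root of I - V diag(D) V^T applied to white noise.
    return z - Vt.T @ ((1.0 - np.sqrt(1.0 - D)) * (Vt @ z))

print(sample_posterior()[:5])
```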