Title: Elliptic Inverse Problems of Identifying Nonlinear Parameters
Inverse problems of identifying parameters in partial differential equations (PDEs) form an important class of problems with many real-world applications. Inverse problems are commonly studied in an optimization setting, with the various known approaches having their own advantages and disadvantages. Although a non-convex output least-squares (OLS) objective has often been used, the convex modified output least-squares (MOLS) functional has attracted considerable attention in recent years. However, the convexity of the MOLS has only been established for parameters appearing linearly in the PDEs. The primary objective of this work is to introduce and analyze a variant of the MOLS for the inverse problem of identifying parameters that appear nonlinearly in variational problems. Besides giving an existence result for the inverse problem, we derive the first-order and second-order derivative formulas for the new functional and use them to identify the conditions under which the new functional is convex. We give a discretization scheme for the continuous inverse problem and prove its convergence. We also obtain discrete formulas for the new MOLS functional and present detailed numerical examples.
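As background for the abstract above (a generic sketch of the classical linear-parameter setting in the MOLS literature, not a formulation taken from this paper; V, z, and a are placeholder symbols for the solution space, the measured data, and the parameter-dependent bilinear form):

    Find u(q) \in V such that  a(q; u(q), v) = b(v)  for all v \in V,   e.g.  a(q; u, v) = \int_\Omega q \, \nabla u \cdot \nabla v \, dx,
    OLS:   J(q) = \tfrac{1}{2} \, \| u(q) - z \|^2,
    MOLS:  G(q) = a(q;\, u(q) - z,\, u(q) - z) = \int_\Omega q \, | \nabla ( u(q) - z ) |^2 \, dx.

When the map q \mapsto a(q; \cdot, \cdot) is linear, G is convex while J in general is not; the functional introduced in this paper is a variant of G designed for parameters q that enter the variational problem nonlinearly.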
Award ID(s):
1720067
NSF-PAR ID:
10063853
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Pure and Applied Functional Analysis
Volume:
3
Issue:
2
Page Range / eLocation ID:
309-326
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We consider hyper-differential sensitivity analysis (HDSA) of nonlinear Bayesian inverse problems governed by partial differential equations (PDEs) with infinite-dimensional parameters. In previous works, HDSA has been used to assess the sensitivity of the solution of deterministic inverse problems to additional model uncertainties and also different types of measurement data. In the present work, we extend HDSA to the class of Bayesian inverse problems governed by PDEs. The focus is on assessing the sensitivity of certain key quantities derived from the posterior distribution. Specifically, we focus on analyzing the sensitivity of the MAP point and the Bayes risk and make full use of the information embedded in the Bayesian inverse problem. After establishing our mathematical framework for HDSA of Bayesian inverse problems, we present a detailed computational approach for computing the proposed HDSA indices. We examine the effectiveness of the proposed approach on an inverse problem governed by a PDE modeling heat conduction. (A schematic definition of the MAP point and Bayes risk is sketched after this list.)
  2. A class of nonlinear, stochastic staticization control problems (including minimization problems with smooth, convex, coercive payoffs) driven by diffusion dynamics with constant diffusion coefficient is considered. A fundamental solution form is obtained where the same solution can be used for a limited variety of terminal costs without re-solution of the problem. One may convert this fundamental solution form from a stochastic control problem form to a deterministic control problem form. This yields an equivalence between certain second-order (in space) Hamilton-Jacobi partial differential equations (HJ PDEs) and associated first-order HJ PDEs. This reformulation has substantial numerical implications. 
  3. Abstract: Obtaining lightweight and accurate approximations of discretized objective functional Hessians in inverse problems governed by partial differential equations (PDEs) is essential to make both deterministic and Bayesian statistical large-scale inverse problems computationally tractable. The cubic computational complexity of dense linear algebraic tasks, such as Cholesky factorization, that provide a means to sample Gaussian distributions and determine solutions of Newton linear systems is a computational bottleneck at large scale. These tasks can be reduced to log-linear complexity by utilizing hierarchical off-diagonal low-rank (HODLR) matrix approximations. In this work, we show that a class of Hessians that arise from inverse problems governed by PDEs are well approximated by the HODLR matrix format; a minimal illustration of the format is sketched after this list. In particular, we study inverse problems governed by PDEs that model the instantaneous viscous flow of ice sheets. In these problems, we seek a spatially distributed basal sliding parameter field such that the flow predicted by the ice sheet model is consistent with ice sheet surface velocity observations. We demonstrate the use of HODLR Hessian approximation to efficiently sample the Laplace approximation of the posterior distribution with covariance further approximated by HODLR matrix compression. Computational studies are performed that illustrate ice sheet problem regimes for which the Gauss–Newton data-misfit Hessian is more efficiently approximated by the HODLR matrix format than the low-rank (LR) format. We then demonstrate that HODLR approximations can be favorable, when compared to global LR approximations, for large-scale problems by studying the data-misfit Hessian associated with inverse problems governed by the first-order Stokes flow model on the Humboldt glacier and Greenland ice sheet.
  4. Summary: Numerical methods are proposed for the nonlinear Stokes‐Biot system modeling the interaction of a free fluid with a poroelastic structure. We discuss time discretization and decoupling schemes that allow the fluid and the poroelastic structure to be computed independently, using a common stress force along the interface. The coupled system of nonlinear Stokes and Biot equations is formulated as a least‐squares problem with constraints, where the objective functional measures the violation of certain interface conditions. The local constraints, the Stokes and Biot models, are discretized in time using second‐order schemes. Computational algorithms for the least‐squares problems are discussed, and numerical results are provided to compare the accuracy and efficiency of the algorithms.
  5. Recent advances have illustrated that it is often possible to learn, from training data, solvers for linear inverse problems in imaging that can outperform more traditional regularized least-squares solutions. Along these lines, we present some extensions of the Neumann network, a recently introduced end-to-end learned architecture inspired by a truncated Neumann series expansion of the solution map of a regularized least-squares problem; a numerical sketch of the underlying series is given after this list. Here we summarize the Neumann network approach and show that it has a form compatible with the optimal reconstruction function for a given inverse problem. We also investigate an extension of the Neumann network that incorporates a more sample-efficient patch-based regularization approach.
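For item 1 above, the two posterior-derived quantities can be written schematically as follows, assuming the common additive-Gaussian-noise, Gaussian-prior setting with parameter-to-observable map F, data d, prior mean m_pr, and noise/prior covariances \Gamma_{\mathrm{noise}}, \Gamma_{\mathrm{pr}} (a generic sketch, not necessarily the paper's exact formulation):

    m_{\mathrm{MAP}}(d) = \arg\min_{m} \; \tfrac{1}{2} \, \| F(m) - d \|^2_{\Gamma_{\mathrm{noise}}^{-1}} + \tfrac{1}{2} \, \| m - m_{\mathrm{pr}} \|^2_{\Gamma_{\mathrm{pr}}^{-1}},
    \text{Bayes risk} \approx \mathbb{E}_{m,\, d} \left[ \, \| m_{\mathrm{MAP}}(d) - m \|^2 \, \right],

where the expectation is taken over the prior and the data distribution; the HDSA indices then quantify how these quantities change with respect to auxiliary model or measurement parameters.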
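For item 3 above, the HODLR format itself can be illustrated with a short, self-contained Python sketch (illustrative only, unrelated to the paper's ice-sheet code): the matrix is split into 2x2 blocks, each off-diagonal block is replaced by a truncated-SVD low-rank factorization, and the diagonal blocks are treated recursively until they are small enough to store densely.

    import numpy as np

    def hodlr_compress(A, rank, leaf_size=64):
        # Recursively build a HODLR-style representation of a square matrix A:
        # dense leaves on the diagonal, truncated-SVD factors off the diagonal.
        n = A.shape[0]
        if n <= leaf_size:
            return {"dense": A.copy()}
        m = n // 2
        U12, s12, Vt12 = np.linalg.svd(A[:m, m:], full_matrices=False)
        U21, s21, Vt21 = np.linalg.svd(A[m:, :m], full_matrices=False)
        return {
            "split": m,
            "off12": (U12[:, :rank] * s12[:rank], Vt12[:rank, :]),
            "off21": (U21[:, :rank] * s21[:rank], Vt21[:rank, :]),
            "diag1": hodlr_compress(A[:m, :m], rank, leaf_size),
            "diag2": hodlr_compress(A[m:, m:], rank, leaf_size),
        }

    def hodlr_matvec(H, x):
        # Apply the compressed operator to a vector.
        if "dense" in H:
            return H["dense"] @ x
        m = H["split"]
        x1, x2 = x[:m], x[m:]
        y1 = hodlr_matvec(H["diag1"], x1) + H["off12"][0] @ (H["off12"][1] @ x2)
        y2 = hodlr_matvec(H["diag2"], x2) + H["off21"][0] @ (H["off21"][1] @ x1)
        return np.concatenate([y1, y2])

    if __name__ == "__main__":
        n = 512
        T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Laplacian (tridiagonal)
        A = np.linalg.inv(T)            # its inverse has off-diagonal blocks of rank 1
        H = hodlr_compress(A, rank=2)
        x = np.random.default_rng(0).standard_normal(n)
        print(np.linalg.norm(hodlr_matvec(H, x) - A @ x) / np.linalg.norm(A @ x))

On this test matrix (the inverse of a tridiagonal matrix, whose off-diagonal blocks have numerical rank one), the compressed matrix-vector product matches the dense product to near machine precision while storing far fewer entries.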
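For item 5 above, the truncated Neumann series that inspires the architecture can be sketched for a plain Tikhonov-regularized least-squares problem (a hypothetical standalone example; the actual Neumann network replaces the fixed regularizer below with a learned mapping and trains the weights end to end):

    import numpy as np

    def neumann_series_solve(X, y, lam, eta, terms):
        # Approximate (X^T X + lam I)^{-1} X^T y by the truncated Neumann series
        #   eta * sum_{j=0}^{terms-1} (I - eta (X^T X + lam I))^j X^T y.
        term = eta * (X.T @ y)
        total = term.copy()
        for _ in range(terms - 1):
            term = term - eta * (X.T @ (X @ term) + lam * term)
            total = total + term
        return total

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.standard_normal((80, 40)) / np.sqrt(80)
        y = rng.standard_normal(80)
        lam = 1.0
        eta = 1.0 / (np.linalg.norm(X, 2) ** 2 + lam)   # step size small enough for convergence
        exact = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
        approx = neumann_series_solve(X, y, lam, eta, terms=100)
        print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))

With the step size chosen as above, the series converges and the truncated sum agrees with the direct regularized least-squares solution to high accuracy; the learned architecture keeps this series structure but makes each term trainable.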