Title: Paired autoencoders for likelihood-free estimation in inverse problems
Abstract: We consider the solution of nonlinear inverse problems where the forward problem is a discretization of a partial differential equation. Such problems are notoriously difficult to solve in practice and require minimizing a combination of a data-fit term and a regularization term. The main computational bottleneck of typical algorithms is the direct estimation of the data misfit. Therefore, likelihood-free approaches have become appealing alternatives. Nonetheless, difficulties in generalization and limitations in accuracy have hindered their broader utility and applicability. In this work, we use a paired autoencoder framework as a likelihood-free estimator (LFE) for inverse problems. We show that the use of such an architecture allows us to construct a solution efficiently and to overcome some known open problems when using LFEs. In particular, our framework can assess the quality of the solution and improve on it if needed. We demonstrate the viability of our approach using examples from full waveform inversion and inverse electromagnetic imaging.
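To make the paired-autoencoder idea concrete, here is a minimal, hypothetical sketch. The class names, layer sizes, and latent dimension are illustrative assumptions rather than the architecture used in the paper: two autoencoders, one for observed data d and one for model parameters m, are linked by a learned latent-space map, so an inverse estimate is produced without evaluating the forward operator or a likelihood, and the data autoencoder's reconstruction error gives a cheap self-check of solution quality.

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    """Simple fully connected autoencoder (illustrative sizes)."""
    def __init__(self, dim, latent_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, dim))

    def forward(self, x):
        return self.dec(self.enc(x))

class PairedAutoencoderLFE(nn.Module):
    """Paired autoencoders for data d and model m, linked in latent space."""
    def __init__(self, d_dim, m_dim, latent_dim=64):
        super().__init__()
        self.ae_d = AE(d_dim, latent_dim)               # autoencoder for data
        self.ae_m = AE(m_dim, latent_dim)               # autoencoder for models
        self.z_map = nn.Linear(latent_dim, latent_dim)  # latent map z_d -> z_m

    def estimate(self, d):
        # Likelihood-free estimate: encode the data, map latents, decode a model.
        return self.ae_m.dec(self.z_map(self.ae_d.enc(d)))

    def self_check(self, d):
        # Relative data-reconstruction error: a large value flags inputs far from
        # the training distribution, for which the estimate should not be trusted.
        return torch.norm(self.ae_d(d) - d) / torch.norm(d)

# Training (not shown) would minimize reconstruction losses for both autoencoders
# plus a misfit between z_map(enc_d(d)) and enc_m(m) over paired (d, m) samples
# generated by forward simulations.
```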
Award ID(s):
2152661
PAR ID:
10558465
Author(s) / Creator(s):
; ; ; ;
Publisher / Repository:
IOP Publishing
Date Published:
Journal Name:
Machine Learning: Science and Technology
Volume:
5
Issue:
4
ISSN:
2632-2153
Format(s):
Medium: X
Size(s):
Article No. 045055
Sponsoring Org:
National Science Foundation
More Like this
  1. We present an extensible software framework, hIPPYlib, for the solution of large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields (which are high-dimensional after discretization). hIPPYlib overcomes the prohibitively expensive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators, notably the Hessian of the log-posterior. The key property of the algorithms implemented in hIPPYlib is that the solution of the inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior with an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log-posterior evaluated at the MAP point. The construction of the posterior covariance is made tractable by invoking a low-rank approximation of the Hessian of the log-likelihood. Scalable tools for sample generation are also discussed. hIPPYlib makes all of these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms.
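The low-rank Laplace approximation referred to in the entry above takes a standard form. In notation introduced here for illustration (Γ_prior, H_misfit, and the rank r are not named in the abstract), the posterior covariance at the MAP point is approximated as

```latex
% Gaussian (Laplace) approximation of the posterior at the MAP point,
% made tractable by a rank-r approximation of the data-misfit Hessian.
\Gamma_{\mathrm{post}}
  \approx \bigl(H_{\mathrm{misfit}}(m_{\mathrm{MAP}}) + \Gamma_{\mathrm{prior}}^{-1}\bigr)^{-1}
  = \Gamma_{\mathrm{prior}} - V_r D_r V_r^{\top},
\qquad
D_r = \operatorname{diag}\!\left(\frac{\lambda_i}{\lambda_i + 1}\right),
```

where the pairs (λ_i, v_i) solve the generalized eigenvalue problem H_misfit v_i = λ_i Γ_prior^{-1} v_i and V_r collects the r dominant eigenvectors. Only about r Hessian-vector products (linearized PDE solves) are needed, which is why the cost is independent of the parameter dimension.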
  2. We consider hyper-differential sensitivity analysis (HDSA) of nonlinear Bayesian inverse problems governed by partial differential equations (PDEs) with infinite-dimensional parameters. In previous works, HDSA has been used to assess the sensitivity of the solution of deterministic inverse problems to additional model uncertainties and to different types of measurement data. In the present work, we extend HDSA to the class of Bayesian inverse problems governed by PDEs. The focus is on assessing the sensitivity of certain key quantities derived from the posterior distribution. Specifically, we focus on analyzing the sensitivity of the MAP point and the Bayes risk and make full use of the information embedded in the Bayesian inverse problem. After establishing our mathematical framework for HDSA of Bayesian inverse problems, we present a detailed computational approach for computing the proposed HDSA indices. We examine the effectiveness of the proposed approach on an inverse problem governed by a PDE modeling heat conduction.
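As a hedged illustration of what a hyper-differential sensitivity looks like (the notation is ours, not taken from the abstract): if the MAP point m(θ) satisfies the first-order optimality condition of the negative log-posterior J for auxiliary parameters θ (auxiliary model uncertainties or the measurement configuration), the implicit function theorem gives its sensitivity operator as

```latex
% Sensitivity of the MAP point m(\theta) to auxiliary parameters \theta,
% obtained by differentiating the optimality condition \nabla_m J(m(\theta), \theta) = 0.
\frac{\partial m}{\partial \theta}
  = -\bigl(\nabla_{mm} J\bigr)^{-1}\,\nabla_{m\theta} J ,
```

and sensitivity indices can then be derived from this operator, for example from its dominant singular values; each application costs one additional Hessian solve.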
  3. Partial differential equation (PDE)-constrained inverse problems are some of the most challenging and computationally demanding problems in computational science today. Fine meshes required to accurately compute the PDE solution introduce an enormous number of parameters and require large-scale computing resources, such as more processors and more memory, to solve such systems in a reasonable time. For inverse problems constrained by time-dependent PDEs, the adjoint method often employed to compute gradients and higher-order derivatives efficiently requires solving a time-reversed, so-called adjoint PDE that depends on the forward PDE solution at each timestep. This necessitates the storage of a high-dimensional forward solution vector at every timestep. Such a procedure quickly exhausts the available memory resources. Several approaches that trade additional computation for reduced memory footprint have been proposed to mitigate the memory bottleneck, including checkpointing and compression strategies. In this work, we propose a close-to-ideal scalable compression approach using autoencoders to eliminate the need for checkpointing and substantial memory storage, thereby reducing the time-to-solution and memory requirements. We compare our approach with checkpointing and an off-the-shelf compression approach on an earth-scale ill-posed seismic inverse problem. The results verify the expected close-to-ideal speedup for the gradient and Hessian-vector product using the proposed autoencoder compression approach. To highlight the usefulness of the proposed approach, we combine the autoencoder compression with the data-informed active subspace (DIAS) prior, showing how the DIAS method can be affordably extended to large-scale problems without the need for checkpointing and large memory.
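A minimal sketch of the compression idea in the entry above, with hypothetical encoder/decoder networks, state sizes, and user-supplied time-stepping routines (the paper trains problem-specific networks offline on representative wavefields): the forward sweep stores only latent codes, and the adjoint sweep decodes an approximate forward state at each step instead of reading checkpoints back from memory or disk.

```python
import torch
import torch.nn as nn

state_dim, latent_dim = 4096, 64          # assumed sizes for illustration
encoder = nn.Sequential(nn.Linear(state_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, state_dim))

def forward_sweep(step, u0, n_steps):
    """Time-step the forward PDE, storing only compressed states."""
    latents, u = [], u0
    with torch.no_grad():
        for _ in range(n_steps):
            u = step(u)                    # one forward timestep (user-supplied)
            latents.append(encoder(u))     # keep the latent code, not the full state
    return latents

def adjoint_sweep(adjoint_step, lam_T, latents):
    """Reverse sweep: decompress the forward state needed at each adjoint step."""
    lam = lam_T
    with torch.no_grad():
        for z in reversed(latents):
            u_approx = decoder(z)          # approximate forward state at this timestep
            lam = adjoint_step(lam, u_approx)
    return lam
```

The memory footprint scales with the latent dimension rather than the full state dimension, which is what removes the checkpointing bottleneck in this sketch.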
  4. In this paper, we introduce a bilevel optimization framework for addressing inverse mean-field games, alongside an exploration of numerical methods tailored for this bilevel problem. The primary benefit of our bilevel formulation lies in maintaining the convexity of the objective function and the linearity of constraints in the forward problem. Our paper focuses on inverse mean-field games characterized by unknown obstacles and metrics. We show numerical stability for these two types of inverse problems. More importantly, we, for the first time, establish the identifiability of the inverse mean-field game with unknown obstacles via the solution of the resultant bilevel problem. The bilevel approach enables us to employ an alternating gradient-based optimization algorithm with a provable convergence guarantee. To validate the effectiveness of our methods in solving the inverse problems, we have designed comprehensive numerical experiments, providing empirical evidence of its efficacy.
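The bilevel structure described above can be written, in illustrative notation of our own (θ the unknown obstacle or metric, O an observation operator, d the data, R a regularizer, and E the convex variational form of the forward mean-field game), as

```latex
% Upper level: fit observations of the game's equilibrium; lower level: solve the
% forward mean-field game, which stays convex with linear constraints for fixed \theta.
\min_{\theta}\; \tfrac{1}{2}\bigl\|\mathcal{O}(u_{\theta}) - d\bigr\|^{2} + R(\theta)
\quad \text{s.t.} \quad
u_{\theta} \in \operatorname*{arg\,min}_{u \in \mathcal{C}}\; E(u;\theta),
```

and the alternating gradient-based algorithm mentioned in the abstract can then alternate updates of the forward-game variables u and of the unknown θ; the exact splitting used in the paper may differ from this generic form.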
  5. Predicting the future contribution of the ice sheets to sea level rise over the next decades presents several challenges due to a poor understanding of critical boundary conditions, such as basal sliding. Traditional numerical models often rely on data assimilation methods to infer spatially variable friction coefficients by solving an inverse problem, given an empirical friction law. However, these approaches are not versatile, as they sometimes demand extensive code development efforts when integrating new physics into the model. Furthermore, this approach makes it difficult to handle sparse data effectively. To tackle these challenges, we use Physics‐Informed Neural Networks (PINNs) to seamlessly integrate observational data and governing equations of ice flow into a unified loss function, facilitating the solution of both forward and inverse problems within the same framework. We illustrate the versatility of this approach by applying the framework to two‐dimensional problems on the Helheim Glacier in southeast Greenland. By systematically concealing one variable (e.g., ice speed, ice thickness, etc.), we demonstrate the ability of PINNs to accurately reconstruct hidden information. Furthermore, we extend this application to address a challenging mixed inversion problem. We show how PINNs are capable of inferring the basal friction coefficient while simultaneously filling gaps in the sparsely observed ice thickness. This unified framework offers a promising avenue to enhance the predictive capabilities of ice sheet models, reduce uncertainties, and advance our understanding of poorly constrained physical processes.
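For the last entry, a minimal, hypothetical sketch of the unified PINN loss: the network shape, the grouping of outputs into (speed, thickness, friction coefficient), the loss weights, and the physics residual below are all placeholders (the actual governing equations are the ice-flow momentum balance on Helheim Glacier). The point of the sketch is that observational misfit on whichever variables are available and a PDE residual at collocation points are summed into one objective, so forward, inverse, and mixed problems differ only in which data terms are present.

```python
import torch
import torch.nn as nn

# Network maps coordinates (x, y) to (u, h, C): ice speed, thickness, friction coefficient.
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 3))

def pde_residual(xy):
    """Placeholder physics residual at collocation points; a real implementation would
    assemble the ice-flow momentum balance from u, h, C and their derivatives."""
    xy = xy.clone().requires_grad_(True)
    u, h, C = net(xy).unbind(dim=-1)
    du = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    return du[:, 0] + du[:, 1]            # stand-in residual, for illustration only

def pinn_loss(xy_obs, obs, xy_col, w_data=1.0, w_pde=1.0):
    pred = net(xy_obs)
    data = ((pred[:, :2] - obs) ** 2).mean()    # fit observed speed and thickness
    phys = (pde_residual(xy_col) ** 2).mean()   # enforce the governing equations
    return w_data * data + w_pde * phys         # single objective for forward/inverse/mixed
```

Concealing a variable (or inferring the friction coefficient while filling thickness gaps) corresponds to dropping or adding the matching data term, with the physics term unchanged.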