This paper presents a regularization framework that aims to improve the fidelity of Tikhonov inverse solutions. At the heart of the framework is the data-informed regularization idea that only data-uninformed parameters need to be regularized, while the data-informed parameters, through which the data and the forward model are integrated, should remain untouched. We propose to employ the active subspace method to determine how data-informed a parameter is. The resulting framework is thus called data-informed (DI) active subspace (DIAS) regularization. Four proposed DIAS variants are rigorously analyzed and shown to be robust with respect to the regularization parameter and capable of avoiding pollution of the solution features informed by the data. They are thus well suited for problems with small or reasonably small noise corruption in the data. Furthermore, the DIAS approaches can effectively reuse any Tikhonov regularization codes/libraries. Though they are readily applicable to nonlinear inverse problems, we focus on linear problems in this paper in order to gain insight into the framework. Various numerical results for linear inverse problems are presented to verify the theoretical findings and to demonstrate the advantages of the DIAS framework over the Tikhonov, truncated SVD (TSVD), and TSVD-based DI approaches.
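For a linear problem, the idea can be made concrete with a small sketch (a schematic illustration only, not any of the paper's four DIAS variants; the operator A, the active-subspace rank r, and the regularization weight alpha are assumed for the example): the active subspace is spanned by the leading right singular vectors of the forward operator, and the Tikhonov penalty is applied only to the component of the parameter in its orthogonal complement.

```python
import numpy as np

def dias_like_solve(A, b, alpha, r):
    """Schematic data-informed regularization:
    min_x ||A x - b||^2 + alpha * ||P_perp x||^2.

    The leading r right singular vectors of A span the data-informed (active)
    directions; only the complementary, data-uninformed component of x is
    penalized. Illustrative sketch only, not the paper's exact DIAS variants.
    """
    _, _, Vt = np.linalg.svd(A, full_matrices=True)
    V_r = Vt[:r].T                                # data-informed directions
    P_perp = np.eye(A.shape[1]) - V_r @ V_r.T     # projector onto uninformed directions
    # P_perp is an orthogonal projector, so P_perp^T P_perp = P_perp.
    return np.linalg.solve(A.T @ A + alpha * P_perp, A.T @ b)
```

Because the penalty never acts on the data-informed directions, increasing alpha cannot wash out the features that the data actually constrain, which is the source of the robustness with respect to the regularization parameter claimed above.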
Distributed Tikhonov regularization for ill-posed inverse problems from a Bayesian perspective
Abstract In this article, we exploit the similarities between Tikhonov regularization and Bayesian hierarchical models to propose a regularization scheme that acts like a distributed Tikhonov regularization, where the amount of regularization varies from component to component, together with a computationally efficient numerical scheme that is suitable for large-scale problems. In the standard formulation, Tikhonov regularization compensates for the inherent ill-conditioning of linear inverse problems by augmenting the data fidelity term, which measures the mismatch between the data and the model output, with a scaled penalty functional. The selection of the scaling of the penalty functional is the core problem in Tikhonov regularization. If an estimate of the amount of noise in the data is available, a popular approach is the Morozov discrepancy principle, which states that the scaling parameter should be chosen so as to guarantee that the norm of the data-fitting error is approximately equal to the norm of the noise in the data. Too small a value of the regularization parameter yields a solution that fits the noise (too weak regularization), while too large a value leads to an excessive penalization of the solution (too strong regularization). In many applications, it would be preferable to apply distributed regularization, replacing the regularization scalar by a vector-valued parameter, so as to allow different regularization for different components of the unknown, or for groups of them. Distributed Tikhonov-inspired regularization is particularly well suited when the data have significantly different sensitivity to different components, or to promote sparsity of the solution. The numerical scheme that we propose, while exploiting the Bayesian interpretation of the inverse problem and identifying Tikhonov regularization with maximum a posteriori estimation, requires no statistical tools. A clever combination of numerical linear algebra and numerical optimization tools makes the scheme computationally efficient and suitable for problems where the matrix is not explicitly available. Moreover, in the case of underdetermined problems, passing through the adjoint formulation in data space may lead to a substantial reduction in computational complexity.
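To fix ideas, the two objects discussed above can be written out in a minimal numpy sketch (the operator A, data b, and noise estimate are hypothetical placeholders, and the component-wise weights theta in the distributed case are supplied by hand here, whereas the paper determines them through the Bayesian hierarchical machinery): scalar Tikhonov with a Morozov-type parameter choice, and its distributed counterpart in which the scalar is replaced by a vector.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Scalar Tikhonov: argmin_x ||A x - b||^2 + alpha * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def discrepancy_alpha(A, b, noise_norm, lo=1e-12, hi=1e12, iters=60):
    """Morozov discrepancy principle via bisection on log(alpha):
    pick alpha so that ||A x_alpha - b|| is approximately the noise norm."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        res = np.linalg.norm(A @ tikhonov(A, b, mid) - b)
        lo, hi = (mid, hi) if res < noise_norm else (lo, mid)
    return np.sqrt(lo * hi)

def distributed_tikhonov(A, b, theta):
    """Distributed Tikhonov: argmin_x ||A x - b||^2 + sum_j theta_j * x_j^2,
    i.e. the scalar alpha is replaced by a component-wise vector theta."""
    return np.linalg.solve(A.T @ A + np.diag(theta), A.T @ b)
```

The core of the proposed method is how to update theta together with the unknown in a computationally efficient, matrix-free fashion; the sketch only illustrates what replacing the regularization scalar by a vector-valued parameter means at the level of the normal equations.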
- Award ID(s):
- 2204618
- PAR ID:
- 10577503
- Publisher / Repository:
- Springer Science + Business Media
- Date Published:
- Journal Name:
- Computational Optimization and Applications
- Volume:
- 91
- Issue:
- 2
- ISSN:
- 0926-6003
- Format(s):
- Medium: X
- Size(s):
- p. 541-572
- Sponsoring Org:
- National Science Foundation
More Like this
Abstract Discrete ill-posed inverse problems arise in various areas of science and engineering. The presence of noise in the data often makes it difficult to compute an accurate approximate solution. To reduce the sensitivity of the computed solution to the noise, one replaces the original problem by a nearby well-posed minimization problem whose solution is less sensitive to the noise in the data than the solution of the original problem. This replacement is known as regularization. We consider the situation when the minimization problem consists of a fidelity term defined in terms of a p-norm and a regularization term defined in terms of a q-norm, with 0 < p, q ≤ 2. The relative importance of the fidelity and regularization terms is determined by a regularization parameter. This paper develops an automatic strategy for determining the regularization parameter for these minimization problems. The proposed approach is based on a new application of generalized cross validation. Computed examples illustrate the performance of the proposed method.
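For orientation, the classical construction being generalized here can be sketched for the quadratic case p = q = 2 (a hedged illustration only; the search grid and the restriction to the identity regularization operator are assumptions, and the paper's extension to 0 < p, q ≤ 2 is not reproduced):

```python
import numpy as np

def gcv_parameter(A, b, grid=np.logspace(-8, 2, 200)):
    """Generalized cross validation for standard Tikhonov (p = q = 2):
    minimize G(lam) = ||A x_lam - b||^2 / (m - trace(A A_lam))^2,
    where x_lam = A_lam b and A_lam = (A^T A + lam I)^{-1} A^T."""
    m, n = A.shape
    best_lam, best_g = None, np.inf
    for lam in grid:
        X = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)  # regularized pseudo-inverse A_lam
        residual = np.linalg.norm(A @ (X @ b) - b) ** 2
        g = residual / (m - np.trace(A @ X)) ** 2
        if g < best_g:
            best_lam, best_g = lam, g
    return best_lam
```

The strategy proposed in the paper adapts this cross-validation idea so that it can be applied when the fidelity and regularization terms are measured in general p- and q-norms.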
ℓ1 regularization is used to preserve edges or enforce sparsity in a solution to an inverse problem. We investigate the split Bregman and majorization-minimization iterative methods, which turn this nonsmooth minimization problem into a sequence of steps that include solving an ℓ2-regularized minimization problem. We consider selecting the regularization parameter in the inner generalized Tikhonov regularization problems that occur at each iteration of these iterative methods. The generalized cross validation method and the chi-squared degrees of freedom test are extended to these inner problems. In particular, for the chi-squared test this includes extending the result to problems in which the regularization operator has more rows than columns, and showing how to use a weighted generalized inverse to estimate prior information at each inner iteration. Numerical experiments for image deblurring problems demonstrate that it is more effective to select the regularization parameter automatically within the iterative schemes than to keep it fixed for all iterations. Moreover, an appropriate regularization parameter can be estimated in the early iterations and then fixed to convergence.
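The outer/inner structure described above can be sketched as follows (a simplified majorization-minimization formulation with smoothed weights; the operators A and L, the smoothing parameter eps, and the placeholder GCV-style selector are illustrative assumptions rather than the paper's exact algorithms):

```python
import numpy as np

def mm_l1(A, b, L, outer_iters=20, eps=1e-6):
    """Majorization-minimization sketch for min_x ||A x - b||^2 + ||L x||_1.

    Each outer step majorizes the l1 term by a weighted quadratic, so the inner
    problem is a generalized Tikhonov problem whose regularization parameter can
    be re-estimated (e.g. by GCV or a chi-squared test) at every iteration.
    """
    x = np.zeros(A.shape[1])
    for _ in range(outer_iters):
        w = 1.0 / np.sqrt((L @ x) ** 2 + eps)        # smoothed majorization weights
        WL = np.diag(np.sqrt(w)) @ L                 # weighted regularization operator
        lam = inner_parameter(A, b, WL)              # per-iteration parameter selection
        x = np.linalg.solve(A.T @ A + lam * (WL.T @ WL), A.T @ b)
    return x

def inner_parameter(A, b, WL, grid=np.logspace(-8, 2, 100)):
    """Placeholder GCV-style selection for the inner generalized Tikhonov problem."""
    m = A.shape[0]
    best_lam, best_g = grid[0], np.inf
    for lam in grid:
        X = np.linalg.solve(A.T @ A + lam * (WL.T @ WL), A.T)
        g = np.linalg.norm(A @ (X @ b) - b) ** 2 / (m - np.trace(A @ X)) ** 2
        if g < best_g:
            best_lam, best_g = lam, g
    return best_lam
```

The observation in the abstract is that re-estimating lam inside the loop, rather than fixing it for all iterations, is more effective, and that a suitable value typically emerges within the first few outer iterations and can then be frozen.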
Deep Learning (DL), in particular deep neural networks (DNNs), is by default purely data-driven and in general does not require physics. This is the strength of DL but also one of its key limitations when applied to science and engineering problems in which underlying physical properties (such as stability, conservation, and positivity) and accuracy are required. DL methods in their original forms are not capable of respecting the underlying mathematical models or achieving desired accuracy even in big-data regimes. On the other hand, many data-driven science and engineering problems, such as inverse problems, typically have limited experimental or observational data, and DL would overfit the data in this case. Leveraging information encoded in the underlying mathematical models, we argue, not only compensates for missing information in low-data regimes but also provides opportunities to equip DL methods with the underlying physics, hence promoting better generalization. This paper develops a model-constrained deep learning approach and its variant TNet, a Tikhonov neural network, that are capable of learning not only information hidden in the training data but also information in the underlying mathematical models, in order to solve inverse problems governed by partial differential equations in low-data regimes. We provide the constructions and some theoretical results for the proposed approaches for both linear and nonlinear inverse problems. Since TNet is designed to learn the inverse solution with Tikhonov regularization, it is interpretable: in fact, it recovers Tikhonov solutions for linear cases while potentially approximating Tikhonov solutions to any desired accuracy for nonlinear inverse problems. We also prove that data randomization can enhance not only the smoothness of the networks but also their generalization. Comprehensive numerical results confirm the theoretical findings and show that, with as little as 1 training data sample for 1D deconvolution, 5 for the inverse 2D heat conductivity problem, 100 for the inverse initial condition of the time-dependent 2D Burgers' equation, and 50 for the inverse initial condition of the 2D Navier-Stokes equations, TNet solutions can be as accurate as Tikhonov solutions while being several orders of magnitude faster. This is possible owing to the model-constrained term, replications, and randomization.
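The flavor of a model-constrained, Tikhonov-style training objective can be conveyed with a short sketch (the network u, forward model F, regularization operator L, and the weights alpha and beta are illustrative assumptions; the exact TNet construction, whose Tikhonov structure is what makes the network interpretable, is given in the paper):

```python
import numpy as np

def model_constrained_loss(u, F, x_train, y_train, alpha, beta, L):
    """Sketch of a Tikhonov-style, model-constrained training objective.

    u : network mapping observed data y -> parameter estimate u(y)
    F : forward model mapping a parameter -> predicted data
    The loss combines a supervised misfit with a model-constrained term that
    pushes F(u(y)) back toward the observed data, plus a Tikhonov penalty.
    Illustrative form only; not the exact TNet objective.
    """
    loss = 0.0
    for x, y in zip(x_train, y_train):
        x_hat = u(y)
        loss += np.sum((x_hat - x) ** 2)             # data-driven misfit
        loss += alpha * np.sum((F(x_hat) - y) ** 2)  # model-constrained term
        loss += beta * np.sum((L @ x_hat) ** 2)      # Tikhonov regularization
    return loss / len(x_train)
```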
We study an inverse problem for the time-dependent Maxwell system in an inhomogeneous and anisotropic medium. The objective is to recover the initial electric field $\mathbf{E}_0$ in a bounded domain $\Omega \subset \mathbb{R}^3$, using boundary measurements of the electric field and its normal derivative over a finite time interval. Informed by practical constraints, we adopt an under-determined formulation of Maxwell's equations that avoids the need for initial magnetic field data and charge density information. To address this inverse problem, we develop a time-dimension reduction approach by projecting the electric field onto a finite-dimensional Legendre polynomial-exponential basis in time. This reformulates the original space-time problem into a sequence of spatial systems for the projection coefficients. The reconstruction is carried out using the quasi-reversibility method within a minimum-norm framework, which accommodates the inherent non-uniqueness of the under-determined setting. We prove a convergence theorem that ensures the quasi-reversibility solution approximates the true solution as the noise and regularization parameters vanish. Numerical experiments in a fully three-dimensional setting validate the method's performance. The reconstructed initial electric field remains accurate even with 10% noise in the data, demonstrating the robustness and applicability of the proposed approach to realistic inverse electromagnetic problems.
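Schematically, the time-dimension reduction amounts to a truncated expansion of the unknown field (the truncation level N and the precise form of the polynomial-exponential basis functions are left abstract here):

```latex
% Truncated expansion in a Legendre polynomial-exponential basis {Psi_n(t)}:
% the space-time unknown is reduced to spatial coefficient fields e_n(x).
\mathbf{E}(\mathbf{x},t) \;\approx\; \sum_{n=1}^{N} \mathbf{e}_n(\mathbf{x})\,\Psi_n(t),
\qquad \mathbf{x}\in\Omega,\ t\in(0,T).
% Projecting Maxwell's system onto span{Psi_1,...,Psi_N} yields coupled spatial
% systems for e_1,...,e_N, solved by quasi-reversibility in a minimum-norm
% framework; the initial field E_0(x) is then read off as E(x,0).
```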