Regularization matrices for discrete ill-posed problems in several space dimensions
- PAR ID: 10055011
- Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
- Date Published:
- Journal Name: Numerical Linear Algebra with Applications
- Volume: 25
- Issue: 4
- ISSN: 1070-5325
- Page Range / eLocation ID: e2163
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Abstract: In this article, we exploit the similarities between Tikhonov regularization and Bayesian hierarchical models to propose a regularization scheme that acts like a distributed Tikhonov regularization, where the amount of regularization varies from component to component, together with a computationally efficient numerical scheme that is suitable for large-scale problems. In the standard formulation, Tikhonov regularization compensates for the inherent ill-conditioning of linear inverse problems by augmenting the data fidelity term, which measures the mismatch between the data and the model output, with a scaled penalty functional. The selection of the scaling of the penalty functional is the core problem in Tikhonov regularization. If an estimate of the amount of noise in the data is available, a popular approach is the Morozov discrepancy principle, which states that the scaling parameter should be chosen so that the norm of the data-fitting error is approximately equal to the norm of the noise in the data. Too small a value of the regularization parameter yields a solution that fits the noise (too weak regularization), while too large a value leads to excessive penalization of the solution (too strong regularization). In many applications it is preferable to apply distributed regularization, replacing the regularization scalar by a vector-valued parameter, so as to allow different regularization for different components of the unknown, or for groups of them. Distributed Tikhonov-inspired regularization is particularly well suited when the data have significantly different sensitivity to different components, or to promote sparsity of the solution. The numerical scheme that we propose, while exploiting the Bayesian interpretation of the inverse problem and identifying Tikhonov regularization with maximum a posteriori estimation, requires no statistical tools. A combination of numerical linear algebra and numerical optimization tools makes the scheme computationally efficient and suitable for problems where the matrix is not explicitly available. Moreover, in the case of underdetermined problems, passing through the adjoint formulation in data space may lead to a substantial reduction in computational complexity. (A hedged code sketch of scalar and distributed Tikhonov regularization with discrepancy-principle parameter selection follows this list.)
- Abstract: Several iterative soft-thresholding algorithms, such as FISTA, have been proposed in the literature for solving regularized linear discrete inverse problems that arise in various applications in science and engineering. These algorithms are easy to implement, but their rates of convergence may be slow. This paper describes novel approaches to reduce the computations required for each iteration by using Krylov subspace techniques. Specifically, we propose to impose sparsity on the coefficients in the representation of the computed solution in terms of a Krylov subspace basis. Several numerical examples from image deblurring and computerized tomography illustrate the efficiency and accuracy of the proposed methods. (A hedged sketch of soft-thresholding restricted to a Krylov subspace basis follows this list.)
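
As a companion to the first related abstract, here is a minimal NumPy sketch of scalar Tikhonov regularization with the regularization parameter selected by the Morozov discrepancy principle, followed by a distributed variant in which a vector of weights replaces the scalar. It is not the authors' Bayesian hierarchical scheme; the test matrix, noise level, safety factor `tau`, and the weight vector `theta` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ill-conditioned test problem (assumption: small dense Hilbert-like matrix).
n = 64
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0.0, np.pi, n))
b_exact = A @ x_true
noise = rng.standard_normal(n)
delta = 1e-3 * np.linalg.norm(b_exact)            # norm of the additive noise
b = b_exact + delta * noise / np.linalg.norm(noise)

def tikhonov(A, b, lam):
    """Solve the regularized normal equations (A^T A + lam I) x = A^T b."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def discrepancy_lambda(A, b, delta, tau=1.01, lo=1e-14, hi=1e2, iters=60):
    """Morozov discrepancy principle: pick lam so that ||A x_lam - b|| ~ tau * delta.
    The residual norm grows monotonically with lam, so bisect on a log scale."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if np.linalg.norm(A @ tikhonov(A, b, mid) - b) < tau * delta:
            lo = mid          # residual too small: regularization too weak, increase lam
        else:
            hi = mid          # residual too large: regularization too strong, decrease lam
    return np.sqrt(lo * hi)

lam = discrepancy_lambda(A, b, delta)
x_scalar = tikhonov(A, b, lam)

# Distributed variant: replace lam * I by diag(theta) so that each component
# (or group of components) receives its own amount of regularization.
theta = lam * np.linspace(0.5, 2.0, n)            # illustrative component-wise weights
x_dist = np.linalg.solve(A.T @ A + np.diag(theta), A.T @ b)

print("relative error (scalar):     ", np.linalg.norm(x_scalar - x_true) / np.linalg.norm(x_true))
print("relative error (distributed):", np.linalg.norm(x_dist - x_true) / np.linalg.norm(x_true))
```

In the distributed variant the single scalar `lam` is replaced by the vector `theta`, so components to which the data are only weakly sensitive can be penalized differently from the rest, which is the flexibility the abstract argues for.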
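
For the second related abstract, the sketch below illustrates the general idea of combining soft-thresholding with a Krylov subspace: a few steps of Golub-Kahan bidiagonalization build a basis V_k, and a FISTA-style iteration imposes sparsity on the coefficients y of the solution x = V_k y by working on the small projected problem. This is only a reading of the abstract, not the paper's algorithm; the test problem, the subspace dimension `k`, the penalty `mu`, and the iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization, A V_k = U_{k+1} B_k, with full
    reorthogonalization (no breakdown check; adequate for this illustrative problem)."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta0 = np.linalg.norm(b)
    U[:, 0] = b / beta0
    for j in range(k):
        w = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        w -= V[:, :j] @ (V[:, :j].T @ w)                  # reorthogonalize against V
        alpha = np.linalg.norm(w); V[:, j] = w / alpha; B[j, j] = alpha
        u = A @ V[:, j] - alpha * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)          # reorthogonalize against U
        beta = np.linalg.norm(u); U[:, j + 1] = u / beta; B[j + 1, j] = beta
    return U, B, V, beta0

def soft(z, t):
    """Soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista_krylov(A, b, k=30, mu=1e-3, iters=200):
    """FISTA on the projected problem min_y 0.5 ||B_k y - beta0 e1||^2 + mu ||y||_1,
    so sparsity is imposed on the Krylov coefficients y and x = V_k y."""
    U, B, V, beta0 = golub_kahan(A, b, k)
    rhs = np.zeros(k + 1); rhs[0] = beta0
    L = np.linalg.norm(B, 2) ** 2                         # Lipschitz constant of the gradient
    y = np.zeros(k); z = y.copy(); t = 1.0
    for _ in range(iters):
        grad = B.T @ (B @ z - rhs)
        y_new = soft(z - grad / L, mu / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = y_new + ((t - 1.0) / t_new) * (y_new - y)     # FISTA momentum step
        y, t = y_new, t_new
    return V @ y

# Illustrative 1D "blur" of a sparse signal (assumption: Gaussian kernel matrix).
n = 200
A = np.exp(-0.1 * (np.arange(n)[:, None] - np.arange(n)[None, :]) ** 2)
x_true = np.zeros(n); x_true[[40, 90, 150]] = [1.0, -0.7, 0.5]
b = A @ x_true + 1e-3 * rng.standard_normal(n)

x = fista_krylov(A, b)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

After the k matrix-vector products spent building the basis, each soft-thresholding iteration only touches the small (k+1)-by-k matrix B_k rather than A, which is where the per-iteration savings described in the abstract come from.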
