

Title: Inexact rational Krylov subspace method for eigenvalue problems
An inexact rational Krylov subspace method is studied for solving large-scale nonsymmetric eigenvalue problems. Each iteration (outer step) of the rational Krylov subspace method requires the solution of a shifted linear system to enlarge the subspace, which for large-scale problems is performed by an iterative linear solver. Errors are introduced at each outer step if these linear systems are solved approximately by iterative methods (inner step), and they accumulate in the rational Krylov subspace. In this article, we derive an upper bound on the errors introduced at each outer step that maintains the same convergence as the exact rational Krylov subspace method for approximating an invariant subspace. Since this bound is inversely proportional to the current eigenresidual norm of the target invariant subspace, the tolerance of the iterative linear solves at each outer step can be relaxed as the outer iteration progresses. A restarted variant of the inexact rational Krylov subspace method is also proposed. Numerical experiments show the effectiveness of relaxing the inner tolerance to save computational cost.
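To make the inner-outer structure and the relaxation idea concrete, here is a minimal Python sketch of one outer step, assuming a sparse matrix A in CSR format, a single fixed shift sigma, an orthonormal basis V, and real Ritz data. The constant c, the 1e-2 cap, and the specific relaxation formula are illustrative placeholders, not the bound derived in the article.

```python
# Minimal sketch (illustrative, not the paper's algorithm): one inner-outer
# step of an inexact rational Krylov iteration with a relaxed inner tolerance.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def inexact_rks_step(A, sigma, V, outer_tol=1e-10, c=1.0):
    """Expand the orthonormal basis V (n x k) by one vector.

    A is assumed sparse CSR; the relaxation rule below is a placeholder:
    the inner tolerance loosens as the current eigenresidual shrinks.
    """
    n = A.shape[0]
    # Rayleigh-Ritz with the current basis to estimate a Ritz pair and its residual.
    H = V.T @ (A @ V)
    theta, y = np.linalg.eig(H)
    i = np.argmin(np.abs(theta - sigma))          # Ritz value closest to the shift
    x = V @ y[:, i].real                          # real part only: a simplification
    res_norm = np.linalg.norm(A @ x - theta[i].real * x) / np.linalg.norm(x)

    # Relax the inner tolerance as the outer eigenresidual decreases (illustrative rule).
    inner_tol = min(1e-2, c * outer_tol / max(res_norm, outer_tol))

    # Inner step: approximately solve the shifted system (A - sigma*I) w = v_k.
    shifted = A - sigma * sp.eye(n, format="csr")
    w, info = spla.gmres(shifted, V[:, -1], rtol=inner_tol, atol=0.0)  # older SciPy: tol=

    # Orthogonalize the new direction against the current basis and normalize.
    w = w - V @ (V.T @ w)
    w /= np.linalg.norm(w)
    return np.column_stack([V, w]), res_norm, inner_tol
```

The only point of the sketch is the coupling between the two loops: the inner GMRES tolerance is allowed to grow as the outer eigenresidual shrinks, which is where the computational savings reported in the experiments come from.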
Authors:
Editors:
Ye, Qiang
Award ID(s):
2111496
Publication Date:
NSF-PAR ID:
10344800
Journal Name:
Numerical Linear Algebra with Applications
ISSN:
1070-5325
Sponsoring Org:
National Science Foundation
More Like this
  1. Bregman-type iterative methods have received considerable attention in recent years due to their ease of implementation and the high quality of the computed solutions they deliver. However, these iterative methods may require a large number of iterations and this reduces their usefulness. This paper develops a computationally attractive linearized Bregman algorithm by projecting the problem to be solved into an appropriately chosen low-dimensional Krylov subspace. The projection reduces the computational effort required for each iteration. A variant of this solution method, in which nonnegativity of each computed iterate is imposed, also is described. Extensive numerical examples illustrate the performance of the proposed methods. (A minimal sketch of the baseline linearized Bregman iteration appears after this list.)
  2. Electrical Impedance Tomography (EIT) is a well-known imaging technique for recovering the electrical properties of an object in order to detect anomalies, such as conductive or resistive targets. More specifically, EIT has many applications in medical imaging for the detection and location of bodily tumors since it is an affordable and non-invasive method, which aims to recover the internal conductivity of a body using voltage measurements resulting from applying low frequency current at electrodes placed at its surface. Mathematically, the reconstruction of the internal conductivity is a severely ill-posed inverse problem and yields a poor quality image reconstruction. To remedy this difficulty, at least in part, we regularize and solve the nonlinear minimization problem with the aid of a Krylov subspace-type method for the linear subproblem during each iteration. In EIT, a tumor or general anomaly can be modeled as a piecewise constant perturbation of a smooth background; hence, we solve the regularized problem on a subspace of relatively small dimension by the Flexible Golub-Kahan process, which provides solutions that have a sparse representation. For comparison, we use a well-known modified Gauss–Newton algorithm as a benchmark. Using simulations, we demonstrate the effectiveness of the proposed method. The obtained reconstructions indicate that the Krylov subspace method is better adapted to solve the ill-posed EIT problem and results in higher resolution images and faster convergence compared to reconstructions using the modified Gauss–Newton algorithm. (A hedged sketch of a regularized Gauss–Newton outer iteration with a Krylov inner solver appears after this list.)
  3. Randomized methods can be competitive for the solution of problems with a large matrix of low rank. They also have been applied successfully to the solution of large-scale linear discrete ill-posed problems by Tikhonov regularization (Xiang and Zou in Inverse Probl 29:085008, 2013). This entails the computation of an approximation of a partial singular value decomposition of a large matrix A that is of numerical low rank. The present paper compares a randomized method to a Krylov subspace method based on Golub-Kahan bidiagonalization with respect to accuracy and computing time and discusses characteristics of linear discrete ill-posed problems that make them well suited for solution by a randomized method. (A minimal sketch of a randomized partial-SVD approximation appears after this list.)
  4. In this paper, we aim at solving the Biot model under stabilized finite element discretizations. To solve the resulting generalized saddle point linear systems, several iterative methods are proposed and compared. In the first method, we apply the GMRES algorithm as the outer iteration. In the second method, the Uzawa method with variable relaxation parameters is employed as the outer iteration. In the third approach, the Uzawa method is treated as a fixed-point iteration and the outer solver is the so-called Anderson acceleration. In all these methods, the inner solvers are preconditioners for the generalized saddle point problem. In the preconditioners, the Schur complement approximation is derived using a Fourier analysis approach. These preconditioners are implemented exactly or inexactly. Extensive experiments are given to justify the performance of the proposed preconditioners and to compare all the algorithms. (A minimal sketch of Anderson acceleration of a generic fixed-point map appears after this list.)
  5. This survey describes probabilistic algorithms for linear algebraic computations, such as factorizing matrices and solving linear systems. It focuses on techniques that have a proven track record for real-world problems. The paper treats both the theoretical foundations of the subject and practical computational issues. Topics include norm estimation, matrix approximation by sampling, structured and unstructured random embeddings, linear regression problems, low-rank approximation, subspace iteration and Krylov methods, error estimation and adaptivity, interpolatory and CUR factorizations, Nyström approximation of positive semidefinite matrices, single-view (‘streaming’) algorithms, full rank-revealing factorizations, solvers for linear systems, and approximation of kernel matrices that arise in machine learning and in scientific computing.
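For item 1 above, here is a minimal sketch of the classical linearized Bregman iteration for the sparse recovery problem min ||x||_1 subject to Ax = b. The Krylov-subspace projection that the paper proposes, which is what reduces the per-iteration cost, is not reproduced; the step size and shrinkage parameter are illustrative.

```python
# Minimal sketch of the classical (unprojected) linearized Bregman iteration.
import numpy as np

def linearized_bregman(A, b, mu=5.0, tau=None, iters=500):
    m, n = A.shape
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2                  # conservative step size
    v = np.zeros(n)                                            # dual/accumulation variable
    x = np.zeros(n)
    for _ in range(iters):
        v += A.T @ (b - A @ x)                                 # gradient-type update
        x = tau * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0) # soft shrinkage
    return x
```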
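For item 2 above, here is a hedged sketch of a regularized Gauss–Newton outer iteration in which each linearized subproblem is solved by a Golub-Kahan-based Krylov method, here plain damped LSQR rather than the Flexible Golub-Kahan process of the paper. The names `forward` and `jacobian` stand in for an EIT forward model and its sensitivity matrix and are assumptions.

```python
# Sketch only: regularized Gauss-Newton with a Krylov (LSQR) inner solve.
import numpy as np
from scipy.sparse.linalg import lsqr

def gauss_newton_krylov(forward, jacobian, d_meas, sigma0, lam=1e-2, outer_iters=10):
    sigma = sigma0.copy()
    for _ in range(outer_iters):
        r = d_meas - forward(sigma)        # data misfit
        J = jacobian(sigma)                # sensitivity (Jacobian) matrix
        delta = lsqr(J, r, damp=lam)[0]    # Tikhonov-damped linear subproblem via Krylov
        sigma += delta                     # conductivity update
    return sigma
```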
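For item 3 above, here is a minimal sketch of the kind of randomized partial-SVD approximation being compared against Golub-Kahan bidiagonalization: a Gaussian sketch, an optional power iteration, and an SVD of the small projected matrix. The Tikhonov regularization step and the Krylov competitor are not shown.

```python
# Minimal sketch of a basic randomized partial SVD (range finder + projected SVD).
import numpy as np

def randomized_svd(A, k, oversample=10, power=1, rng=None):
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # Gaussian test matrix
    Y = A @ Omega
    for _ in range(power):                            # power iteration sharpens the range
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                            # orthonormal basis for the range
    B = Q.T @ A                                       # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]
```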
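For item 4 above, here is a minimal sketch of Anderson acceleration applied to a generic fixed-point map g. Using it to wrap an Uzawa sweep for the Biot system, and the window size m, are assumptions; none of the paper's preconditioners are reproduced.

```python
# Minimal sketch of (type-II) Anderson acceleration of a fixed-point map g.
import numpy as np

def anderson(g, x0, m=5, iters=50, tol=1e-10):
    x = x0.copy()
    X_hist, F_hist = [], []                        # recent iterates and residuals
    for _ in range(iters):
        f = g(x) - x                               # fixed-point residual
        if np.linalg.norm(f) < tol:
            break
        X_hist.append(x.copy()); F_hist.append(f.copy())
        if len(X_hist) > m + 1:                    # keep at most m difference columns
            X_hist.pop(0); F_hist.pop(0)
        if len(X_hist) > 1:
            dX = np.column_stack([X_hist[i+1] - X_hist[i] for i in range(len(X_hist) - 1)])
            dF = np.column_stack([F_hist[i+1] - F_hist[i] for i in range(len(F_hist) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]  # least-squares mixing weights
            x = x + f - (dX + dF) @ gamma          # Anderson-mixed update
        else:
            x = x + f                              # plain fixed-point step
    return x
```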