This paper concerns the theory and development of inexact rational Krylov subspace methods for approximating the action of a function of a matrix, f(A), on a column vector b. At each step of a rational Krylov subspace method, a shifted linear system of equations must be solved to enlarge the subspace. For large-scale problems, such a linear system is usually solved approximately by an iterative method. The main question is how far the accuracy of these linear solves can be relaxed without negatively affecting the convergence of the approximation of f(A)b. Our insight into this issue comes from residual bounds for the rational Krylov subspace approximations of f(A)b, based on the decaying behavior of the entries in the first column of certain matrix functions of the restriction of A to the rational Krylov subspaces. The decay bounds for these entries, for both analytic functions and Markov functions, can be evaluated efficiently and accurately by appropriate quadrature rules. A heuristic based on these bounds is proposed to relax the tolerances of the linear solves arising at each step of the rational Krylov subspace methods. As the algorithm progresses toward convergence, the linear solves can be performed with increasingly lower accuracy and computational cost. Numerical experiments for large nonsymmetric matrices show the effectiveness of the tolerance relaxation strategy for the inexact linear solves of rational Krylov subspace methods.
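The relaxation idea can be made concrete with a short sketch. Below is a minimal shift-and-invert rational Krylov loop for f = exp that loosens the GMRES tolerance of each shifted solve as successive outer approximations stabilize. The specific rule tol_j = tol0 / (outer progress) is an illustrative stand-in for the paper's bound-based heuristic, and all names, constants, and the single-pole setting are assumptions, not the authors' method.

```python
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import gmres

def inexact_rk_expm(A, b, pole=2.0, m=25, tol0=1e-12):
    """Sketch: approximate exp(A)b from K_m((A - pole*I)^{-1}, b),
    solving each shifted system inexactly with a relaxed tolerance."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    M = A - pole * np.eye(n)
    fb_old, progress = None, 1.0
    for j in range(m):
        # Relax the inner tolerance as the outer iteration converges:
        # smaller outer progress => larger admissible solve tolerance.
        tol_j = min(1e-2, tol0 / max(progress, tol0))
        w, _ = gmres(M, V[:, j], rtol=tol_j, atol=0.0)  # inexact shifted solve
        for i in range(j + 1):                          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                         # lucky breakdown
            break
        V[:, j + 1] = w / H[j + 1, j]
        Vj = V[:, : j + 1]
        # Rayleigh-quotient extraction of the approximation to exp(A)b.
        fb_new = Vj @ (expm(Vj.T @ A @ Vj) @ (Vj.T @ b))
        if fb_old is not None:
            progress = np.linalg.norm(fb_new - fb_old)  # outer progress estimate
        fb_old = fb_new
    return fb_old
```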
Inexact rational Krylov subspace method for eigenvalue problems
An inexact rational Krylov subspace method is studied for solving large-scale nonsymmetric eigenvalue problems. Each iteration (outer step) of the rational Krylov subspace method requires the solution of a shifted linear system to enlarge the subspace, which for large-scale problems is carried out by an iterative linear solver. Errors are introduced at each outer step if these linear systems are solved approximately by iterative methods (inner steps), and they accumulate in the rational Krylov subspace. In this article, we derive an upper bound on the errors introduced at each outer step that maintains the same convergence as the exact rational Krylov subspace method for approximating an invariant subspace. Since this bound is inversely proportional to the current eigenresidual norm of the target invariant subspace, the tolerance of the iterative linear solves at each outer step can be relaxed as the outer iteration progresses. A restarted variant of the inexact rational Krylov subspace method is also proposed. Numerical experiments show the effectiveness of relaxing the inner tolerance to save computational cost.
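A sketch of the same idea for eigenvalue problems: a shift-and-invert Arnoldi iteration whose inner GMRES tolerance is relaxed in inverse proportion to the current eigenresidual norm, which matches the shape of the bound stated above. The constants, the single-vector setting, and the choice of dominant Ritz pair are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import gmres

def inexact_si_arnoldi(A, v0, sigma=0.0, m=30, tol0=1e-12):
    """Sketch: shift-and-invert Arnoldi for the eigenvalue of A nearest
    sigma, with inner tolerance relaxed as the eigenresidual shrinks."""
    n = v0.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    M = A - sigma * np.eye(n)
    res, lam, x = 1.0, None, None
    for j in range(m):
        # Inner tolerance inversely proportional to the eigenresidual norm.
        tol_j = min(1e-2, tol0 / max(res, tol0))
        w, _ = gmres(M, V[:, j], rtol=tol_j, atol=0.0)
        for i in range(j + 1):                    # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
        # Dominant Ritz pair of H approximates the eigenvalue nearest sigma.
        theta, Y = np.linalg.eig(H[: j + 1, : j + 1])
        k = np.argmax(np.abs(theta))
        lam = sigma + 1.0 / theta[k]              # undo the spectral transform
        x = V[:, : j + 1] @ Y[:, k]
        x = x / np.linalg.norm(x)
        res = np.linalg.norm(A @ x - lam * x)     # current eigenresidual norm
    return lam, x
```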
- PAR ID: 10344800
- Editor(s): Ye, Qiang
- Date Published:
- Journal Name: Numerical Linear Algebra with Applications
- ISSN: 1070-5325
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Bregman-type iterative methods have received considerable attention in recent years due to their ease of implementation and the high quality of the computed solutions they deliver. However, these iterative methods may require a large number of iterations, which reduces their usefulness. This paper develops a computationally attractive linearized Bregman algorithm by projecting the problem to be solved into an appropriately chosen low-dimensional Krylov subspace. The projection reduces the computational effort required for each iteration. A variant of this solution method, in which nonnegativity of each computed iterate is imposed, is also described. Extensive numerical examples illustrate the performance of the proposed methods.
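For reference, here is a minimal version of the classical linearized Bregman iteration that the paper accelerates by Krylov projection; the projection step itself is not reproduced, and the step size and shrinkage parameter are illustrative.

```python
import numpy as np

def linearized_bregman(A, b, mu=5.0, iters=500):
    """Classical linearized Bregman iteration for min ||x||_1 s.t. Ax = b;
    the paper runs such an iteration on a small Krylov-projected problem."""
    delta = 1.0 / np.linalg.norm(A, 2) ** 2       # step size <= 1/||A||_2^2
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = v + A.T @ (b - A @ x)                 # gradient (Landweber) step
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # soft shrinkage
    return x
```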
-
Quantum linear system algorithms (QLSAs) have the potential to speed up algorithms that rely on solving linear systems. Interior point methods (IPMs) yield a fundamental family of polynomial-time algorithms for solving optimization problems. IPMs solve a Newton linear system at each iteration to compute the search direction; thus, QLSAs can potentially speed up IPMs. Due to the noise in contemporary quantum computers, quantum-assisted IPMs (QIPMs) only admit an inexact solution to the Newton linear system. Typically, an inexact search direction leads to an infeasible solution; to overcome this, we propose an inexact-feasible QIPM (IF-QIPM) for solving linearly constrained quadratic optimization problems. We also apply the algorithm to ℓ1-norm soft margin support vector machine (SVM) problems, and demonstrate that our algorithm enjoys a speedup in the dimension over existing approaches. This complexity bound is better than that of any existing classical or quantum algorithm that produces a classical solution.
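One standard device that makes inexact directions compatible with feasibility, sketched below, is to represent the primal step in a null-space basis of the constraints: any reduced direction, however inexactly computed, then preserves Ax = b exactly. This is only the general feasibility-preserving idea, not the paper's full IF-QIPM; the toy data are made up for illustration.

```python
import numpy as np
from scipy.linalg import null_space

# Toy illustration: with dx = Z u and A Z = 0, feasibility A(x + dx) = b
# holds exactly no matter how inexactly u was computed.
A = np.array([[1.0, 1.0, 1.0]])        # toy constraint A x = b
b = np.array([1.0])
x = np.array([0.2, 0.3, 0.5])          # feasible point: A @ x == b
Z = null_space(A)                      # columns span null(A)
u = np.array([0.1, -0.05])             # an arbitrary (inexact) reduced direction
dx = Z @ u
print(np.allclose(A @ (x + dx), b))    # True: the step stays feasible
```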
-
Randomized methods can be competitive for the solution of problems with a large matrix of low rank. They have also been applied successfully to the solution of large-scale linear discrete ill-posed problems by Tikhonov regularization (Xiang and Zou in Inverse Probl 29:085008, 2013). This entails the computation of an approximation of a partial singular value decomposition of a large matrix A that is of numerical low rank. The present paper compares a randomized method to a Krylov subspace method based on Golub–Kahan bidiagonalization with respect to accuracy and computing time and discusses characteristics of linear discrete ill-posed problems that make them well suited for solution by a randomized method.
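The randomized partial SVD in question typically follows the Halko–Martinsson–Tropp range-finder template; a minimal sketch with oversampling and power iterations is given below (the parameter values are illustrative, not the paper's).

```python
import numpy as np

def randomized_svd(A, k, p=10, q=1, seed=0):
    """Randomized partial SVD: sample the range of A, orthonormalize,
    then take an exact SVD of the small projected matrix."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + p))   # Gaussian test matrix
    Y = A @ Omega
    for _ in range(q):                 # power iterations improve accuracy
        Y = A @ (A.T @ Y)              # when singular values decay slowly
    Q, _ = np.linalg.qr(Y)             # orthonormal basis for the range of A
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U_small[:, :k], s[:k], Vt[:k, :]
```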
-
Gradient coding is a method for mitigating straggling servers in a centralized computing network that uses erasure-coding techniques to distributively carry out first-order optimization methods. Randomized numerical linear algebra uses randomization to develop improved algorithms for large-scale linear algebra computations. In this paper, we propose a method for distributed optimization that combines gradient coding and randomized numerical linear algebra. The proposed method uses a randomized ℓ2-subspace embedding and a gradient coding technique to distribute blocks of data to the computational nodes of a centralized network, and at each iteration the central server only requires a small number of computations to obtain the steepest descent update. The novelty of our approach is that the data are replicated according to importance scores, called block leverage scores, in contrast to most gradient coding approaches, which uniformly replicate the data blocks. Furthermore, we do not require a decoding step at each iteration, avoiding a bottleneck in previous gradient coding schemes. We show that our approach results in a valid ℓ2-subspace embedding and that the resulting approximation converges to the optimal solution.
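A sketch of the block leverage scores that drive the replication, under the standard definition for a tall matrix A with thin QR factor Q; the distributed coding layer itself is omitted and the sizes are illustrative.

```python
import numpy as np

def block_leverage_scores(A, blocks):
    """Block leverage score of row block I: ||Q[I, :]||_F^2,
    where A = Q R is a thin QR factorization."""
    Q, _ = np.linalg.qr(A)
    return np.array([np.linalg.norm(Q[idx, :], 'fro') ** 2 for idx in blocks])

# Replicate data blocks in proportion to their scores, in contrast to
# the uniform replication used by most gradient coding schemes.
rng = np.random.default_rng(0)
A = rng.standard_normal((120, 6))
blocks = np.array_split(np.arange(120), 12)
scores = block_leverage_scores(A, blocks)
replicas = rng.multinomial(36, scores / scores.sum())  # copies per block
print(replicas)
```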