Abstract. We describe a Lanczos-based algorithm for approximating the product of a rational matrix function with a vector. This algorithm, which we call the Lanczos method for optimal rational matrix function approximation (Lanczos-OR), returns the optimal approximation from a given Krylov subspace in a norm depending on the rational function’s denominator, and it can be computed using the information from a slightly larger Krylov subspace. We also provide a low-memory implementation which only requires storing a number of vectors proportional to the denominator degree of the rational function. Finally, we show that Lanczos-OR can be used to derive algorithms for computing other matrix functions, including the matrix sign function and quadrature-based rational function approximations. In many cases, it improves on the approximation quality of prior approaches, including the standard Lanczos method, with little additional computational overhead.
Performance of several Lanczos eigensolvers with HISQ fermions
We investigate the state-of-the-art Lanczos eigensolvers available in the Grid and QUDA libraries. They include Implicitly Restarted Lanczos, Thick-Restart Lanczos, and Block Lanczos. We measure and analyze their performance for the Highly Improved Staggered Quark (HISQ) Dirac operator. We also discuss optimization of Chebyshev acceleration.
- Award ID(s):
- 2013064
- PAR ID:
- 10340513
- Date Published:
- Journal Name:
- PoS - Proceedings of Science
- Volume:
- 053
- ISSN:
- 1824-8039
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
The power method is commonly applied to compute the Perron vector of large adjacency matrices. Blondel et al. [SIAM Rev. 46, 2004] investigated its performance when the adjacency matrix has multiple eigenvalues of the same magnitude. It is well known that the Lanczos method typically requires fewer iterations than the power method to determine eigenvectors with the desired accuracy. However, the Lanczos method demands more computer storage, which may make it impractical to apply to very large problems. The present paper adapts the analysis by Blondel et al. to the Lanczos and restarted Lanczos methods. The restarted methods are found to yield fast convergence and to require less computer storage than the Lanczos method. Computed examples illustrate the theory presented. Applications of the Arnoldi method are also discussed.
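For concreteness, the power iteration this abstract analyzes can be sketched in a few lines of NumPy. This is an illustrative sketch (the function name and stopping rule are ours, not from the paper); it assumes the largest-magnitude eigenvalue is simple, which is exactly the assumption that fails in the degenerate cases Blondel et al. study.

```python
import numpy as np

def perron_vector(A, tol=1e-10, max_iter=10_000):
    # Power iteration for the dominant (Perron) eigenvector of a
    # nonnegative matrix A. Assumes a simple largest-magnitude eigenvalue;
    # repeated eigenvalues of equal magnitude slow or stall convergence.
    x = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(max_iter):
        y = A @ x
        y /= y.sum()          # normalize in the 1-norm (entries stay positive)
        if np.abs(y - x).sum() < tol:
            break
        x = y
    return y
```

For the Fibonacci matrix [[0, 1], [1, 1]] the iteration converges to an eigenvector whose entry ratio is the golden ratio, as expected from its Perron eigenvalue.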
-
Abstract. We study Krylov complexity in various models of quantum field theory: free massive bosons and fermions on flat space and on spheres, holographic models, and lattice models with a UV-cutoff. In certain cases, we observe asymptotic behavior in Lanczos coefficients that extends beyond the previously observed universality. We confirm that, in all cases, the exponential growth of Krylov complexity satisfies the conjectured inequality, which generalizes the Maldacena-Shenker-Stanford bound on chaos. We discuss the temperature dependence of Lanczos coefficients and note that the relationship between the growth of Lanczos coefficients and chaos may only hold for the sufficiently late, truly asymptotic regime, governed by physics at the UV cutoff. Contrary to previous suggestions, we demonstrate scenarios in which Krylov complexity in quantum field theory behaves qualitatively differently from holographic complexity.
-
Approximating the action of a matrix function $$f(\vec{A})$$ on a vector $$\vec{b}$$ is an increasingly important primitive in machine learning, data science, and statistics, with applications such as sampling high dimensional Gaussians, Gaussian process regression and Bayesian inference, principal component analysis, and approximating Hessian spectral densities. Over the past decade, a number of algorithms enjoying strong theoretical guarantees have been proposed for this task. Many of the most successful belong to a family of algorithms called \emph{Krylov subspace methods}. Remarkably, a classic Krylov subspace method, called the Lanczos method for matrix functions (Lanczos-FA), frequently outperforms newer methods in practice. Our main result is a theoretical justification for this finding: we show that, for a natural class of \emph{rational functions}, Lanczos-FA matches the error of the best possible Krylov subspace method up to a multiplicative approximation factor. The approximation factor depends on the degree of $f(x)$'s denominator and the condition number of $$\vec{A}$$, but not on the number of iterations $$k$$.  Our result provides a strong justification for the excellent performance of Lanczos-FA, especially on functions that are well approximated by rationals, such as the matrix square root.
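The classic Lanczos-FA method this abstract refers to admits a compact sketch: run k steps of Lanczos on a symmetric A to build an orthonormal basis Q and tridiagonal T, then approximate f(A)b by ||b|| Q f(T) e1. The NumPy code below is a minimal illustration under our own naming (it is not the paper's implementation); it uses full reorthogonalization and omits breakdown handling, which is adequate only for small dense problems.

```python
import numpy as np

def lanczos_fa(A, b, k, f):
    # Lanczos-FA sketch: approximate f(A) @ b from the k-dimensional Krylov
    # subspace of a symmetric matrix A. f acts elementwise on Ritz values.
    n = b.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)       # diagonal of T
    beta = np.zeros(k - 1)    # off-diagonal of T
    q = b / np.linalg.norm(b)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        alpha[j] = q @ w
        w -= alpha[j] * q
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)  # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    # f(A) b  ≈  ||b|| * Q f(T) e1, with f(T) e1 via T's eigendecomposition
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, V = np.linalg.eigh(T)
    return np.linalg.norm(b) * (Q @ (V @ (f(evals) * V[0, :])))
```

With k equal to the dimension of A, the Krylov subspace is (generically) the whole space and the approximation is exact up to rounding, e.g. for f(x) = sqrt(x) on a positive definite A.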
-
One of the most compelling features of Gaussian process (GP) regression is its ability to provide well-calibrated posterior distributions. Recent advances in inducing point methods have sped up GP marginal likelihood and posterior mean computations, leaving posterior covariance estimation and sampling as the remaining computational bottlenecks. In this paper we address these shortcomings by using the Lanczos algorithm to rapidly approximate the predictive covariance matrix. Our approach, which we refer to as LOVE (LanczOs Variance Estimates), substantially improves time and space complexity. In our experiments, LOVE computes covariances up to 2,000 times faster and draws samples 18,000 times faster than existing methods, all without sacrificing accuracy.