Abstract. We describe a Lanczos-based algorithm for approximating the product of a rational matrix function with a vector. This algorithm, which we call the Lanczos method for optimal rational matrix function approximation (Lanczos-OR), returns the optimal approximation from a given Krylov subspace in a norm depending on the rational function’s denominator, and it can be computed using the information from a slightly larger Krylov subspace. We also provide a low-memory implementation which only requires storing a number of vectors proportional to the denominator degree of the rational function. Finally, we show that Lanczos-OR can be used to derive algorithms for computing other matrix functions, including the matrix sign function and quadrature-based rational function approximations. In many cases, it improves on the approximation quality of prior approaches, including the standard Lanczos method, with little additional computational overhead.
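As a point of reference for the quantity such Krylov methods approximate, the product $$r(\mathbf{A})\mathbf{b} = q(\mathbf{A})^{-1} p(\mathbf{A}) \mathbf{b}$$ for a rational function $$r = p/q$$ can be evaluated densely. The sketch below is only a small-matrix baseline, not Lanczos-OR itself; the function names and the descending-degree coefficient convention are our own.

```python
import numpy as np

def poly_apply(coeffs, A):
    """Evaluate a matrix polynomial by Horner's rule.
    `coeffs` lists coefficients in descending degree order."""
    P = np.zeros_like(A)
    I = np.eye(A.shape[0])
    for c in coeffs:
        P = P @ A + c * I
    return P

def rational_matfunc_apply(A, b, p_coeffs, q_coeffs):
    """Dense reference evaluation of r(A)b = q(A)^{-1} p(A) b for the
    rational function r = p/q. This costs O(n^3) per term; a Krylov
    method such as Lanczos-OR approximates the same product iteratively
    without ever forming r(A) explicitly."""
    return np.linalg.solve(poly_apply(q_coeffs, A),
                           poly_apply(p_coeffs, A) @ b)
```

For example, with $$r(x) = 1/(x+1)$$ one passes `p_coeffs=[1.0]` and `q_coeffs=[1.0, 1.0]`, and the result agrees with solving $$(\mathbf{A}+\mathbf{I})\mathbf{x} = \mathbf{b}$$.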
A spectrum adaptive kernel polynomial method
The kernel polynomial method (KPM) is a powerful numerical method for approximating spectral densities. Typical implementations of the KPM require an a priori estimate for an interval containing the support of the target spectral density, and while such estimates can be obtained by classical techniques, this incurs additional computational cost. We propose a spectrum adaptive KPM based on the Lanczos algorithm without reorthogonalization, which allows the selection of KPM parameters to be deferred until after the expensive computation is finished. Theoretical results from numerical analysis are given to justify the suitability of the Lanczos algorithm for our approach, even in finite precision arithmetic. While conceptually simple, the paradigm of decoupling computation from approximation has a number of practical and pedagogical benefits, which we highlight with numerical examples.
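The expensive computation in question is the Lanczos recurrence itself. A minimal sketch of that kernel is below (no reorthogonalization, three vectors of working storage; the function name is ours). Everything downstream, including the choice of approximation parameters, can be computed later from the returned tridiagonal coefficients, which is where the deferred parameter selection would enter.

```python
import numpy as np

def lanczos(A, v0, k):
    """Run k steps of the Lanczos algorithm without reorthogonalization.
    Returns the diagonal (alpha) and off-diagonal (beta) entries of the
    tridiagonal matrix T; only three vectors are kept in memory."""
    q_prev = np.zeros(len(v0))
    q = v0 / np.linalg.norm(v0)
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    for j in range(k):
        w = A @ q
        if j > 0:
            w = w - beta[j - 1] * q_prev   # three-term recurrence
        alpha[j] = q @ w
        w = w - alpha[j] * q
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return alpha, beta
```

The eigenvalues of the tridiagonal matrix built from `alpha` and `beta` are the Ritz values, from which spectral-density approximations can be assembled after the fact.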
- Award ID(s): 2045590
- PAR ID: 10496714
- Publisher / Repository: AIP Publishing
- Date Published:
- Journal Name: The Journal of Chemical Physics
- Volume: 159
- Issue: 11
- ISSN: 0021-9606
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
The cumulative empirical spectral measure (CESM) $$\Phi[\mathbf{A}] : \mathbb{R} \to [0,1]$$ of an $$n\times n$$ symmetric matrix $$\mathbf{A}$$ is defined as the fraction of eigenvalues of $$\mathbf{A}$$ less than a given threshold, i.e., $$\Phi[\mathbf{A}](x) := \sum_{i=1}^{n} \frac{1}{n} {\large\unicode{x1D7D9}}[ \lambda_i[\mathbf{A}]\leq x]$$. Spectral sums $$\operatorname{tr}(f[\mathbf{A}])$$ can be computed as the Riemann–Stieltjes integral of $$f$$ against $$\Phi[\mathbf{A}]$$, so the task of estimating the CESM arises frequently in a number of applications, including machine learning. We present an error analysis for stochastic Lanczos quadrature (SLQ). We show that SLQ obtains an approximation to the CESM within a Wasserstein distance of $$t \: | \lambda_{\text{max}}[\mathbf{A}] - \lambda_{\text{min}}[\mathbf{A}] |$$ with probability at least $$1-\eta$$, by applying the Lanczos algorithm for $$\lceil 12 t^{-1} + \frac{1}{2} \rceil$$ iterations to $$\lceil 4 ( n+2 )^{-1}t^{-2} \ln(2n\eta^{-1}) \rceil$$ vectors sampled independently and uniformly from the unit sphere. We additionally provide (matrix-dependent) a posteriori error bounds for the Wasserstein and Kolmogorov–Smirnov distances between the output of this algorithm and the true CESM. The quality of our bounds is demonstrated using numerical experiments.
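A compact sketch of the SLQ estimator for spectral sums $$\frac{1}{n}\operatorname{tr}(f[\mathbf{A}])$$, i.e., the integral of $$f$$ against the CESM, is below. It is a simplified illustration rather than the exact algorithm analyzed in the abstract: it uses full reorthogonalization for clarity, and the function name and interface are our own.

```python
import numpy as np

def slq_trace(A, f, k, n_vec, rng=None):
    """Estimate (1/n) tr(f(A)) by stochastic Lanczos quadrature: each
    random unit starting vector yields a k-node Gaussian quadrature rule
    for f against a weighted spectral measure of A, and the rules are
    averaged over n_vec independent samples."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    total = 0.0
    for _ in range(n_vec):
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)                    # uniform on the unit sphere
        # k Lanczos steps (full reorthogonalization, for clarity)
        Q = np.zeros((n, k))
        alpha, beta = np.zeros(k), np.zeros(k - 1)
        q = v
        for j in range(k):
            Q[:, j] = q
            w = A @ q
            alpha[j] = q @ w
            w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
            if j < k - 1:
                beta[j] = np.linalg.norm(w)
                q = w / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        theta, S = np.linalg.eigh(T)              # quadrature nodes...
        total += np.sum(S[0, :] ** 2 * f(theta))  # ...and weights
    return total / n_vec
```

Since the quadrature weights $$S_{0,i}^2$$ sum to one for each sample, a constant integrand is reproduced exactly; for general $$f$$ the accuracy improves with the number of Lanczos steps `k` and the number of sample vectors `n_vec`.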
Abstract We consider the problem of extracting a few desired eigenpairs of the buckling eigenvalue problem, where $$K$$ is symmetric positive semi-definite, $$K_G$$ is symmetric indefinite, and the pencil is singular, namely, $$K$$ and $$K_G$$ share a nontrivial common nullspace. Moreover, in practical buckling analysis of structures, bases for the nullspace of $$K$$ and the common nullspace of $$K$$ and $$K_G$$ are available. There are two open issues for developing an industrial-strength shift-invert Lanczos method: (1) the shift-invert operator does not exist or is extremely ill-conditioned, and (2) the use of the semi-inner product induced by $$K$$ drives the Lanczos vectors rapidly toward the nullspace of $$K$$, which leads to rapid growth of the Lanczos vectors in norm and causes permanent loss of information and the failure of the method. In this paper, we address these two issues by proposing a generalized buckling spectral transformation of the singular pencil and a regularization of the inner product via a low-rank updating of the semi-positive definite $$K$$. The efficacy of our approach is demonstrated by numerical examples, including one from industrial buckling analysis.
We combine data-driven reduced order models (ROMs) with the Lippmann–Schwinger integral equation to produce a direct nonlinear inversion method. The ROM is viewed as a Galerkin projection and is sparse due to Lanczos orthogonalization. Embedding into the continuous problem, a data-driven internal solution is produced. This internal solution is then used in the Lippmann–Schwinger equation, in a direct or iterative framework. The new approach also allows us to process non-square matrix-valued data-transfer functions, i.e., to remove the main limitation of earlier versions of ROM-based inversion algorithms. We show numerical experiments for spectral-domain data for which our inversion is far superior to the Born inversion.
Approximating the action of a matrix function $$f(\vec{A})$$ on a vector $$\vec{b}$$ is an increasingly important primitive in machine learning, data science, and statistics, with applications such as sampling high dimensional Gaussians, Gaussian process regression and Bayesian inference, principal component analysis, and approximating Hessian spectral densities. Over the past decade, a number of algorithms enjoying strong theoretical guarantees have been proposed for this task. Many of the most successful belong to a family of algorithms called \emph{Krylov subspace methods}. Remarkably, a classic Krylov subspace method, called the Lanczos method for matrix functions (Lanczos-FA), frequently outperforms newer methods in practice. Our main result is a theoretical justification for this finding: we show that, for a natural class of \emph{rational functions}, Lanczos-FA matches the error of the best possible Krylov subspace method up to a multiplicative approximation factor. The approximation factor depends on the degree of $$f(x)$$'s denominator and the condition number of $$\vec{A}$$, but not on the number of iterations $$k$$. Our result provides a strong justification for the excellent performance of Lanczos-FA, especially on functions that are well approximated by rationals, such as the matrix square root.
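Lanczos-FA itself admits a short implementation: run $$k$$ Lanczos steps started at $$\vec{b}$$ to obtain an orthonormal basis $$Q$$ and tridiagonal $$T$$, then return $$\|\vec{b}\| \, Q f(T) e_1$$. The sketch below is a hedged illustration (names ours, full reorthogonalization used for simplicity rather than efficiency).

```python
import numpy as np

def lanczos_fa(A, b, k, f):
    """Lanczos-FA: approximate f(A)b by ||b|| * Q f(T) e1, where Q and
    the tridiagonal T come from k Lanczos steps started at b."""
    n = len(b)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    q = b / np.linalg.norm(b)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)  # reorthogonalize
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)
    fT_e1 = S @ (f(theta) * S[0, :])  # f(T) @ e1 via eigendecomposition of T
    return np.linalg.norm(b) * (Q @ fT_e1)
```

When `k` equals the matrix dimension (and the Krylov subspace is full), the approximation is exact; for `k` much smaller, the error behavior is the subject of the analysis above. With `f=np.sqrt` this yields the matrix square root example mentioned in the abstract.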