Extrapolating the Arnoldi Algorithm to Improve Eigenvector Convergence
We consider extrapolation of the Arnoldi algorithm to accelerate computation of the dominant eigenvalue/eigenvector pair. The basic algorithm uses sequences of Krylov vectors to form a small eigenproblem, which is solved exactly. The dominant eigenvectors output by two consecutive Arnoldi steps are then recombined to form an extrapolated iterate, and this accelerated iterate is used to restart the next Arnoldi process. We present numerical results testing the algorithm on a variety of cases and find that on most examples it substantially improves the performance of restarted Arnoldi. The extrapolation is a simple post-processing step with minimal computational cost.
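As a concrete illustration of the scheme described in the abstract, here is a minimal numpy sketch: each restart cycle runs m Arnoldi steps, solves the small Hessenberg eigenproblem exactly, and mixes the dominant Ritz vectors from two consecutive cycles before restarting. The recombination rule and the coefficient `gamma` below are placeholders, not the formula derived in the paper.

```python
import numpy as np

def arnoldi_ritz(A, v0, m):
    """One Arnoldi cycle: build an m-step Krylov basis with modified
    Gram-Schmidt, solve the small Hessenberg eigenproblem exactly, and
    return the dominant Ritz value/vector."""
    n = v0.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:          # happy breakdown: invariant subspace
            break
        V[:, j + 1] = w / H[j + 1, j]
    evals, evecs = np.linalg.eig(H[:m, :m])
    k = np.argmax(np.abs(evals))
    u = np.real(V[:, :m] @ evecs[:, k])  # assumes a real dominant pair
    return evals[k], u / np.linalg.norm(u)

def extrapolated_arnoldi(A, v0, m=10, cycles=20, gamma=0.5):
    """Restarted Arnoldi with an extrapolated restart vector; the mixing
    coefficient `gamma` is a hypothetical choice for illustration."""
    _, u_prev = arnoldi_ritz(A, v0, m)
    for _ in range(cycles):
        lam, u = arnoldi_ritz(A, u_prev, m)
        if u @ u_prev < 0:               # fix the eigenvector sign ambiguity
            u = -u
        x = u + gamma * (u - u_prev)     # recombine consecutive Ritz vectors
        u_prev = x / np.linalg.norm(x)   # restart from the extrapolated iterate
    return lam, u_prev
```

The extrapolation touches only two length-n vectors per cycle, which is why it adds essentially no cost on top of the Arnoldi factorization itself.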
- Award ID(s): 1852876
- PAR ID: 10303755
- Date Published:
- Journal Name: International Journal of Numerical Analysis and Modeling
- Volume: 18
- Issue: 5
- ISSN: 2617-8710
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Stochastic Approximation (SA) is a widely used algorithmic approach in various fields, including optimization and reinforcement learning (RL). Among RL algorithms, Q-learning is particularly popular due to its empirical success. In this paper, we study asynchronous Q-learning with constant stepsize, which is commonly used in practice for its fast convergence. By connecting constant-stepsize Q-learning to a time-homogeneous Markov chain, we show the distributional convergence of the iterates in Wasserstein distance and establish its exponential convergence rate. We also establish a Central Limit Theorem for the Q-learning iterates, demonstrating the asymptotic normality of the averaged iterates. Moreover, we provide an explicit expansion of the asymptotic bias of the averaged iterate in the stepsize. Specifically, the bias is proportional to the stepsize up to higher-order terms, and we provide an explicit expression for the linear coefficient. This precise characterization of the bias allows the application of the Richardson-Romberg (RR) extrapolation technique to construct a new estimate that is provably closer to the optimal Q function. Numerical results corroborate our theoretical findings on the improvement from RR extrapolation.
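A minimal sketch of the Richardson-Romberg idea from the abstract above, assuming a hypothetical tabular environment interface `env_step(s, a, rng) -> (next_state, reward)`; the discount, behavior policy, and stepsizes are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def q_learning_avg(env_step, n_states, n_actions, alpha, T,
                   discount=0.9, seed=0):
    """Asynchronous Q-learning with constant stepsize `alpha`, returning
    the running (Polyak) average of the iterates."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    Q_bar = np.zeros_like(Q)
    s = 0
    for t in range(1, T + 1):
        a = int(rng.integers(n_actions))      # random behavior policy
        s_next, r = env_step(s, a, rng)
        td = r + discount * Q[s_next].max() - Q[s, a]
        Q[s, a] += alpha * td                 # one asynchronous update
        Q_bar += (Q - Q_bar) / t              # running average of iterates
        s = s_next
    return Q_bar

def rr_extrapolate(env_step, n_states, n_actions, alpha, T):
    """Richardson-Romberg: the averaged iterate's bias is c*alpha up to
    higher-order terms, so 2*Q(alpha/2) - Q(alpha) cancels the linear term."""
    Q_a = q_learning_avg(env_step, n_states, n_actions, alpha, T)
    Q_h = q_learning_avg(env_step, n_states, n_actions, alpha / 2, T)
    return 2.0 * Q_h - Q_a
```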
- We study the problem of finding the maximum of a function defined on the nodes of a connected graph. The goal is to identify a node where the function attains its maximum. We focus on local iterative algorithms, which traverse the nodes of the graph along a path, choosing the next iterate from the neighbors of the current iterate with a probability distribution determined by the function values at the current iterate and its neighbors. We study two algorithms corresponding to Metropolis-Hastings random walks with different transition kernels: (i) the first is an exponentially weighted random walk governed by a parameter gamma; (ii) the second is defined with respect to the graph Laplacian and a smoothness parameter k. We derive convergence rates for the two algorithms in terms of total variation distance and hitting times. We also provide simulations showing the relative convergence rates of our algorithms in comparison to an unbiased random walk, as a function of the smoothness of the graph function. Our algorithms may be categorized as a new class of “descent-based” methods for function maximization on the nodes of a graph.
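An exponentially weighted walk of the kind described above might look roughly like the following sketch; the softmax-over-neighborhood kernel is a simplified stand-in for the paper's exact Metropolis-Hastings transition rule, and `neighbors`, `f`, and `gamma` are assumed inputs:

```python
import numpy as np

def exp_weighted_walk(neighbors, f, start, gamma, steps, seed=0):
    """Local search for argmax of f on a graph: at each step, pick the
    next node from the closed neighborhood of the current one with
    probability proportional to exp(gamma * f)."""
    rng = np.random.default_rng(seed)
    v = start
    for _ in range(steps):
        cand = [v] + list(neighbors[v])         # stay or move to a neighbor
        fv = np.array([f(u) for u in cand])
        w = np.exp(gamma * (fv - fv.max()))     # stabilized softmax weights
        v = cand[rng.choice(len(cand), p=w / w.sum())]
    return v

# e.g. neighbors as an adjacency dict: {0: [1, 2], 1: [0], 2: [0]}
```

Larger gamma biases the walk more strongly uphill, trading exploration for faster climbing, which mirrors the rate/smoothness trade-off studied in the paper.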
- Representing real-time data as a sum of complex exponentials provides a compact form that enables both denoising and extrapolation. As a fully data-driven method, the Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) algorithm is agnostic to the underlying physical equations, making it broadly applicable to various observables and experimental or numerical setups. In this work, we consider applications of the ESPRIT algorithm primarily to extend real-time dynamical data from simulations of quantum systems. We evaluate ESPRIT's performance in the presence of noise and compare it to other extrapolation methods. We demonstrate its ability to extract information from short-time dynamics to reliably predict long-time behavior and determine the minimum time interval required for accurate results. We discuss how this insight can be leveraged in numerical methods that propagate quantum systems in time, and show how ESPRIT can predict infinite-time values of dynamical observables, offering a purely data-driven approach to characterizing quantum phases.
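A compact sketch of a standard ESPRIT variant as described above (Hankel embedding, SVD, shift invariance of the signal subspace); the window length `L = N // 2` and the least-squares amplitude fit are common generic choices rather than details taken from this work:

```python
import numpy as np

def esprit(x, p):
    """Fit x[t] ~ sum_k c_k * exp(z_k * t) for t = 0..N-1 (unit spacing).
    Returns the complex rates z_k and amplitudes c_k."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    L = N // 2
    H = np.array([x[i:i + L] for i in range(N - L + 1)])  # Hankel matrix
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    Us = U[:, :p]                                 # rank-p signal subspace
    # Rotational invariance: the row-shifted subspaces are related by Phi,
    # whose eigenvalues are the discrete poles exp(z_k).
    Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]
    z = np.log(np.linalg.eigvals(Phi))
    V = np.exp(np.outer(np.arange(N), z))         # Vandermonde-type basis
    c, *_ = np.linalg.lstsq(V, x, rcond=None)     # amplitudes by least squares
    return z, c

# Denoise or extrapolate by evaluating the fitted model at any time t:
#   x_hat(t) = sum_k c[k] * np.exp(z[k] * t)
```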
- In this paper we study the smooth convex-concave saddle point problem. Specifically, we analyze the last-iterate convergence properties of the Extragradient (EG) algorithm. It is well known that the ergodic (averaged) iterates of EG converge at a rate of O(1/T) (Nemirovski, 2004). In this paper, we show that the last iterate of EG converges at a rate of O(1/√T). To the best of our knowledge, this is the first paper to provide a convergence rate guarantee for the last iterate of EG for the smooth convex-concave saddle point problem. Moreover, we show that this rate is tight by proving a lower bound of Ω(1/√T) for the last iterate. This lower bound therefore shows a quadratic separation between the convergence rates of the ergodic and last iterates in smooth convex-concave saddle point problems.
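For reference, a minimal sketch of the Extragradient iteration whose last-iterate rate is discussed above; the bilinear test problem and stepsize below are illustrative, not taken from the paper:

```python
import numpy as np

def extragradient(grad_x, grad_y, x0, y0, eta, T):
    """Extragradient for min_x max_y f(x, y): take a gradient half-step,
    then update from the gradient evaluated at the midpoint. Returns the
    last iterate, the object of the O(1/sqrt(T)) rate above."""
    x, y = x0.astype(float), y0.astype(float)
    for _ in range(T):
        xh = x - eta * grad_x(x, y)        # extrapolation (half) step
        yh = y + eta * grad_y(x, y)
        x = x - eta * grad_x(xh, yh)       # update with midpoint gradient
        y = y + eta * grad_y(xh, yh)
    return x, y

# Bilinear example f(x, y) = x @ A @ y, where plain gradient
# descent-ascent cycles or diverges but EG's last iterate converges
# to the saddle point at the origin:
A = np.array([[1.0, 0.5], [0.0, 1.0]])
x, y = extragradient(lambda x, y: A @ y, lambda x, y: A.T @ x,
                     np.ones(2), np.ones(2), eta=0.2, T=500)
```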