Title: A Riemann–Hilbert Approach to the Perturbation Theory for Orthogonal Polynomials: Applications to Numerical Linear Algebra and Random Matrix Theory
Abstract: We establish a new perturbation theory for orthogonal polynomials using a Riemann–Hilbert approach and consider applications in numerical linear algebra and random matrix theory. This new approach shows that the orthogonal polynomials with respect to two measures can be effectively compared using the difference of their Stieltjes transforms on a suitably chosen contour. Moreover, when two measures are close and satisfy some regularity conditions, we use the theta functions of a hyperelliptic Riemann surface to derive explicit and accurate expansion formulae for the perturbed orthogonal polynomials. In contrast to other approaches, a key strength of the methodology is that estimates can remain valid as the degree of the polynomial grows. The results are applied to analyze several numerical algorithms from linear algebra, including the Lanczos tridiagonalization procedure, the Cholesky factorization, and the conjugate gradient algorithm. As a case study, we investigate these algorithms applied to a general spiked sample covariance matrix model by considering the eigenvector empirical spectral distribution and its limits. For the first time, we give precise estimates on the output of the algorithms, applied to this wide class of random matrices, as the number of iterations diverges. In this setting, beyond the first-order expansion, we also derive a new mesoscopic central limit theorem for the associated orthogonal polynomials and other quantities relevant to numerical algorithms.
Award ID(s):
1945652 2306439
PAR ID:
10443738
Author(s) / Creator(s):
;
Publisher / Repository:
Oxford Academic
Date Published:
Journal Name:
International Mathematics Research Notices
Volume:
2024
Issue:
5
ISSN:
1073-7928
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
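
The abstract above studies, among other things, the Lanczos tridiagonalization of spiked sample covariance matrices and the orthogonal polynomials attached to the spectral measure of the starting vector. The following minimal sketch is not the paper's Riemann–Hilbert analysis; it only shows how the Jacobi (recurrence) coefficients that the paper's estimates describe are generated in practice. The model sizes, spike values, starting vector, and iteration count are illustrative assumptions, and the `lanczos` helper is written here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def lanczos(A, v, k):
    """Run k steps of Lanczos tridiagonalization with full reorthogonalization.
    Returns the Jacobi coefficients: alpha (diagonal of T_k) and beta
    (off-diagonal of T_k)."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        # Full reorthogonalization against all previous Lanczos vectors
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        beta[j] = np.linalg.norm(w)
        if beta[j] == 0.0:          # breakdown: an invariant subspace was found
            break
        Q[:, j + 1] = w / beta[j]
    return alpha, beta[:-1]

# Spiked sample covariance model: population covariance = identity plus a few
# spiked directions; W = Y Y^T / m with independent Gaussian columns in Y.
# (All sizes and spike values here are illustrative.)
n, m = 500, 1000
spikes = np.array([8.0, 5.0])
pop = np.ones(n)
pop[:spikes.size] = spikes
Y = np.sqrt(pop)[:, None] * rng.standard_normal((n, m))
W = (Y @ Y.T) / m

v = rng.standard_normal(n)          # starting vector for the Krylov iteration
alpha, beta = lanczos(W, v, 30)
print(np.c_[alpha, np.r_[beta, np.nan]])   # Jacobi / recurrence coefficients
```

The coefficients returned by `lanczos` are the three-term recurrence coefficients of the orthogonal polynomials for the weighted spectral measure $\sum_i |\langle v, u_i\rangle|^2 \delta_{\lambda_i}$ determined by the starting vector; the paper's perturbation theory concerns how such quantities behave as the number of iterations grows.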
More Like this
  1. We develop a numerical method for computing with orthogonal polynomials that are orthogonal on multiple, disjoint intervals for which analytical formulae are currently unknown. Our approach exploits the Fokas–Its–Kitaev Riemann–Hilbert representation of the orthogonal polynomials to produce a method to compute the first N recurrence coefficients. The method can also be used for pointwise evaluation of the polynomials and their Cauchy transforms throughout the complex plane. The method encodes the singularity behavior of weight functions using weighted Cauchy integrals of Chebyshev polynomials. This greatly improves the efficiency of the method, outperforming other available techniques. We demonstrate the fast convergence of our method and present applications to integrable systems and approximation theory. (A conventional numerical baseline for this computation is sketched after this list.)
  2. Large matrices arise in many machine learning and data analysis applications, including as representations of datasets, graphs, model weights, and first and second-order derivatives. Randomized Numerical Linear Algebra (RandNLA) is an area which uses randomness to develop improved algorithms for ubiquitous matrix problems. The area has reached a certain level of maturity, but recent hardware trends, efforts to incorporate RandNLA algorithms into core numerical libraries, and advances in machine learning, statistics, and random matrix theory have led to new theoretical and practical challenges. This article provides a self-contained overview of RandNLA in light of these developments.
  3. We study the expected number of real zeros for random linear combinations of orthogonal polynomials. It is well known that Kac polynomials, spanned by monomials with i.i.d. Gaussian coefficients, have only $(2/\pi + o(1))\log{n}$ expected real zeros in terms of the degree $n$. If the basis is given by the orthonormal polynomials associated with a compactly supported Borel measure on the real line, or associated with a Freud weight, then random linear combinations have $n/\sqrt{3} + o(n)$ expected real zeros. We prove that the same asymptotic relation holds for all random orthogonal polynomials on the real line associated with a large class of weights, and give local results on the expected number of real zeros. We also show that the counting measures of properly scaled zeros of these random polynomials converge weakly to either the Ullman distribution or the arcsine distribution. (A Monte Carlo check of these counts is sketched after this list.)
  4. We provide an exact analysis of a class of randomized algorithms for solving overdetermined least-squares problems. We consider first-order methods, where the gradients are pre-conditioned by an approximation of the Hessian, based on a subspace embedding of the data matrix. This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems. We focus on two classical embeddings, namely, Gaussian projections and subsampled randomized Hadamard transforms (SRHT). Our key technical innovation is the derivation of the limiting spectral density of SRHT embeddings. Leveraging this novel result, we derive the family of normalized orthogonal polynomials of the SRHT density and find the optimal pre-conditioned first-order method along with its rate of convergence. Our analysis of Gaussian embeddings proceeds similarly and leverages classical random matrix theory results. In particular, we show that for a given sketch size, SRHT embeddings exhibit a faster rate of convergence than Gaussian embeddings. Then, we propose a new algorithm by optimizing the computational complexity over the choice of the sketching dimension. To our knowledge, our resulting algorithm yields the best known complexity for solving least-squares problems with no condition number dependence. (A simple sketch-and-precondition baseline appears after this list.)
  5. We complete Dyson's dream by cementing the links between symmetric spaces and classical random matrix ensembles. Previous work has focused on a one-to-one correspondence between symmetric spaces and many, but not all, of the classical random matrix ensembles. This work shows that we can completely capture all of the classical random matrix ensembles from Cartan's symmetric spaces through the use of alternative coordinate systems. In the end, we have to let go of the notion of a one-to-one correspondence. We emphasize that the KAK decomposition traditionally favored by mathematicians is merely one coordinate system on the symmetric space, albeit a beautiful one. However, other matrix factorizations, especially the generalized singular value decomposition from numerical linear algebra, reveal themselves to be perfectly valid coordinate systems, showing that one symmetric space can lead to many classical random matrix theories. We establish the connection between this numerical linear algebra viewpoint and the theory of generalized Cartan decompositions. This, in turn, allows us to produce yet more random matrix theories from a single symmetric space. Yet again, these random matrix theories arise from matrix factorizations, though ones that, to our knowledge, have not previously appeared in the literature.
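
For item 1 above, a conventional baseline for the same task, computing recurrence coefficients of polynomials orthogonal on two disjoint intervals, is a discretized Stieltjes procedure: approximate the weight by a quadrature rule on each interval and run the three-term recurrence on the resulting discrete measure. The sketch below implements that baseline, not the Riemann–Hilbert method of the paper; the choice of intervals, the constant weight, and the quadrature size are illustrative assumptions.

```python
import numpy as np

def discretized_stieltjes(x, w, N):
    """First N recurrence coefficients (a_k, b_k) of the monic orthogonal
    polynomials for the discrete measure sum_i w_i * delta_{x_i}, using
    pi_{k+1}(t) = (t - a_k) pi_k(t) - b_k pi_{k-1}(t)."""
    a = np.zeros(N)
    b = np.zeros(N)                      # convention: b_0 = total mass
    pi_prev = np.zeros_like(x)
    pi_curr = np.ones_like(x)
    nrm_prev = 1.0
    nrm_curr = np.dot(w, pi_curr**2)
    b[0] = nrm_curr
    for k in range(N):
        a[k] = np.dot(w, x * pi_curr**2) / nrm_curr
        if k > 0:
            b[k] = nrm_curr / nrm_prev
        pi_next = (x - a[k]) * pi_curr - (b[k] if k > 0 else 0.0) * pi_prev
        pi_prev, pi_curr = pi_curr, pi_next
        nrm_prev, nrm_curr = nrm_curr, np.dot(w, pi_curr**2)
    return a, b

# Approximate a constant weight on the two intervals [-2,-1] and [1,2] with a
# Gauss-Legendre rule on each piece (intervals and weight are illustrative).
def two_interval_measure(pts_per_interval=400):
    t, wq = np.polynomial.legendre.leggauss(pts_per_interval)
    xs, ws = [], []
    for lo, hi in [(-2.0, -1.0), (1.0, 2.0)]:
        xs.append(0.5 * (hi - lo) * t + 0.5 * (hi + lo))
        ws.append(0.5 * (hi - lo) * wq)
    return np.concatenate(xs), np.concatenate(ws)

x, w = two_interval_measure()
a, b = discretized_stieltjes(x, w, 20)
print(np.c_[a, b])      # a_k should be ~0 by the symmetry of this weight
```

Per the abstract, the Riemann–Hilbert approach targets the same first N coefficients (and pointwise evaluation in the complex plane) with better efficiency than discretization-based baselines of this kind.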
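
The counts quoted in item 3 above are easy to probe numerically. The Monte Carlo sketch below compares empirical averages with $(2/\pi)\log n$ for Kac polynomials and $n/\sqrt{3}$ for random combinations of orthonormal Legendre polynomials; the degree, number of trials, and the tolerance used to classify a computed root as real are illustrative choices, and the Kac asymptotic carries an $O(1)$ correction at finite degree.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_real(roots, tol=1e-6):
    """Count roots whose imaginary part is numerically zero."""
    return int(np.sum(np.abs(np.imag(roots)) < tol))

def kac_real_zeros(n, trials=200):
    """Average number of real zeros of degree-n Kac polynomials
    (i.i.d. standard Gaussian coefficients in the monomial basis)."""
    total = 0
    for _ in range(trials):
        c = rng.standard_normal(n + 1)
        total += count_real(np.polynomial.polynomial.polyroots(c))
    return total / trials

def legendre_real_zeros(n, trials=200):
    """Average number of real zeros of random combinations of *orthonormal*
    Legendre polynomials on [-1, 1] (P_k rescaled by sqrt((2k+1)/2))."""
    scale = np.sqrt((2 * np.arange(n + 1) + 1) / 2.0)
    total = 0
    for _ in range(trials):
        c = scale * rng.standard_normal(n + 1)
        total += count_real(np.polynomial.legendre.legroots(c))
    return total / trials

n = 80
print("Kac:     ", kac_real_zeros(n), " vs (2/pi) log n =", 2 / np.pi * np.log(n))
print("Legendre:", legendre_real_zeros(n), " vs n / sqrt(3) =", n / np.sqrt(3))
```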
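
Item 4 above analyzes first-order least-squares solvers whose gradients are preconditioned through a subspace embedding of the data matrix. The sketch below is a rough stand-in for that class, a plain sketch-and-precondition gradient iteration rather than the optimal method derived in the paper; the SRHT construction pads the row dimension to a power of two, and the problem sizes, sketch size, step size, and iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import hadamard, qr, solve_triangular

rng = np.random.default_rng(1)

def srht_sketch(A, m):
    """Subsampled randomized Hadamard transform (SRHT) sketch of the rows of A.
    Rows are zero-padded to the next power of two so a Hadamard matrix applies."""
    n, d = A.shape
    n_pad = 1 << (n - 1).bit_length()
    A_pad = np.vstack([A, np.zeros((n_pad - n, d))])
    signs = rng.choice([-1.0, 1.0], size=n_pad)         # random sign diagonal D
    H = hadamard(n_pad) / np.sqrt(n_pad)                # orthogonal Hadamard
    rows = rng.choice(n_pad, size=m, replace=False)     # uniform row subsample
    return np.sqrt(n_pad / m) * (H @ (signs[:, None] * A_pad))[rows]

def sketch_and_precondition(A, b, m, iters=100, step=0.7):
    """Preconditioned gradient iteration for min ||Ax - b||_2: R comes from a QR
    of the sketched matrix, so A @ inv(R) has singular values near 1 and a
    conservative fixed step converges quickly."""
    _, R = qr(srht_sketch(A, m), mode='economic')
    y = np.zeros(A.shape[1])
    for _ in range(iters):
        r = b - A @ solve_triangular(R, y)                        # residual
        y = y + step * solve_triangular(R, A.T @ r, trans='T')    # step * R^{-T} A^T r
    return solve_triangular(R, y)                                 # x = R^{-1} y

# Illustrative ill-conditioned least-squares problem.
n, d = 2048, 50
A = rng.standard_normal((n, d)) * np.logspace(0, 3, d)            # column scaling
x_star = rng.standard_normal(d)
b = A @ x_star + 0.01 * rng.standard_normal(n)

x = sketch_and_precondition(A, b, m=1000)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(A @ x - b), np.linalg.norm(A @ x_ls - b))
```

Because $A R^{-1}$ is close to column-orthonormal after sketching, a conservative fixed step converges quickly here; the paper instead derives the optimal first-order scheme and its exact rate from the limiting spectral density of the embedding.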