Title: Expected Number of Real Zeros for Random Orthogonal Polynomials
We study the expected number of real zeros for random linear combinations of orthogonal polynomials. It is well known that Kac polynomials, spanned by monomials with i.i.d. Gaussian coefficients, have only $$(2/\pi + o(1))\log{n}$$ expected real zeros in terms of the degree $$n$$. If the basis is given by the orthonormal polynomials associated with a compactly supported Borel measure on the real line, or associated with a Freud weight, then random linear combinations have $$n/\sqrt{3} + o(n)$$ expected real zeros. We prove that the same asymptotic relation holds for all random orthogonal polynomials on the real line associated with a large class of weights, and give local results on the expected number of real zeros. We also show that the counting measures of properly scaled zeros of these random polynomials converge weakly to either the Ullman distribution or the arcsine distribution.
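As a quick sanity check on the Kac asymptotic, the leading term $$(2/\pi)\log{n}$$ can be probed by direct simulation. The sketch below is illustrative only; the degree, trial count, and root-classification tolerance are arbitrary choices, and for finite $$n$$ the mean sits a roughly constant amount above the leading term (that gap is absorbed in the $$o(1)$$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100          # polynomial degree (arbitrary choice)
trials = 200     # Monte Carlo sample size (arbitrary choice)

counts = []
for _ in range(trials):
    coeffs = rng.standard_normal(n + 1)   # i.i.d. Gaussian coefficients
    roots = np.roots(coeffs)
    # a root is counted as real when its imaginary part is numerically zero
    counts.append(int(np.sum(np.abs(roots.imag) < 1e-6)))

mean_real_zeros = np.mean(counts)
leading_term = (2 / np.pi) * np.log(n)
print(mean_real_zeros, leading_term)
```

For degree 100 the leading term is about 2.93; the simulated mean exceeds it by a modest constant, consistent with the lower-order terms the asymptotic suppresses.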
Award ID(s):
1800251
PAR ID:
10092094
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Mathematical proceedings of the Cambridge Philosophical Society
Volume:
164
ISSN:
1469-8064
Page Range / eLocation ID:
47-66
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    We determine the asymptotics for the variance of the number of zeros of random linear combinations of orthogonal polynomials of degree $$n$$ in subintervals $$[a, b]$$ of the support of the underlying orthogonality measure. We show that, as $$n \to \infty$$, this variance is asymptotic to $$cn$$, for some explicit constant $$c$$.
  2. Daras, N.; Rassias, T. (Ed.)
    Abstract. Let $$\{\lambda_j\}_{j=1}^{\infty}$$ be a sequence of distinct positive numbers. Let $$w$$ be a nonnegative function, integrable on the real line. One can form orthogonal Dirichlet polynomials $$\{\phi_n\}$$ from linear combinations of
  3. Abstract We establish a new perturbation theory for orthogonal polynomials using a Riemann–Hilbert approach and consider applications in numerical linear algebra and random matrix theory. This new approach shows that the orthogonal polynomials with respect to two measures can be effectively compared using the difference of their Stieltjes transforms on a suitably chosen contour. Moreover, when two measures are close and satisfy some regularity conditions, we use the theta functions of a hyperelliptic Riemann surface to derive explicit and accurate expansion formulae for the perturbed orthogonal polynomials. In contrast to other approaches, a key strength of the methodology is that estimates can remain valid as the degree of the polynomial grows. The results are applied to analyze several numerical algorithms from linear algebra, including the Lanczos tridiagonalization procedure, the Cholesky factorization, and the conjugate gradient algorithm. As a case study, we investigate these algorithms applied to a general spiked sample covariance matrix model by considering the eigenvector empirical spectral distribution and its limits. For the first time, we give precise estimates on the output of the algorithms, applied to this wide class of random matrices, as the number of iterations diverges. In this setting, beyond the first order expansion, we also derive a new mesoscopic central limit theorem for the associated orthogonal polynomials and other quantities relevant to numerical algorithms. 
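The link to the Lanczos procedure is concrete: running Lanczos on a symmetric matrix $$A$$ builds exactly the three-term recurrence of the orthogonal polynomials for $$A$$'s spectral measure. A minimal sketch, with full reorthogonalization for stability; the test matrix, starting vector, and iteration count are illustrative assumptions:

```python
import numpy as np

def lanczos(A, q0, k):
    """k steps of Lanczos tridiagonalization of a symmetric matrix A.

    Returns Q (n x k, orthonormal Krylov basis) and T (k x k, tridiagonal)
    with Q.T @ A @ Q ~= T.  Full reorthogonalization is used here, which is
    fine for the small k assumed in this sketch.
    """
    n = A.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(max(k - 1, 0))
    q = q0 / np.linalg.norm(q0)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = q @ w
        # orthogonalize the residual against all previous Lanczos vectors
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return Q, T

# toy spectrum: 99 eigenvalues in [0, 1] plus one well-separated one at 3
rng = np.random.default_rng(2)
A = np.diag(np.concatenate([np.linspace(0.0, 1.0, 99), [3.0]]))
Q, T = lanczos(A, rng.standard_normal(100), k=20)
top = np.linalg.eigvalsh(T).max()
print(top)   # the top Ritz value converges to the isolated eigenvalue
```

With the isolated eigenvalue well separated from the bulk, a handful of iterations already resolves it to machine precision, which is the kind of behavior the perturbation estimates above are designed to track as the degree grows.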
  4.
    In this paper we consider the following sparse recovery problem. We have query access to a vector 𝐱 ∈ ℝ^N such that x̂ = 𝐅 𝐱 is k-sparse (or nearly k-sparse) for some orthogonal transform 𝐅. The goal is to output an approximation (in an 𝓁₂ sense) to x̂ in sublinear time. This problem has been well-studied in the special case that 𝐅 is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time O(k ⋅ polylog N). However, for transforms 𝐅 other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled. In this paper we give sublinear-time algorithms - running in time poly(k log(N)) - for solving the sparse recovery problem for orthogonal transforms 𝐅 that arise from orthogonal polynomials. More precisely, our algorithm works for any 𝐅 that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability. Our approach is to give a very general reduction from the k-sparse sparse recovery problem to the 1-sparse sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this one-sparse recovery problem for transforms derived from Jacobi polynomials. Frequently, sparse FFT algorithms are described as implementing such a reduction; however, the technical details of such works are quite specific to the Fourier transform and moreover the actual implementations of these algorithms do not use the 1-sparse algorithm as a black box. 
In this work we give a reduction that works for a broad class of orthogonal polynomial families, and which uses any 1-sparse recovery algorithm as a black box. 
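To make the setup concrete: Chebyshev polynomials are the simplest Jacobi case, and sampling them at Chebyshev nodes yields an orthonormal transform (the DCT-II, suitably normalized). The sketch below only illustrates the dense forward/inverse transform and an exactly k-sparse $$\hat{x}$$; the sublinear-time recovery is the paper's contribution and is not reproduced here. Sizes and sparsity are arbitrary assumptions:

```python
import numpy as np

N, k = 64, 3
rng = np.random.default_rng(3)

# Orthonormal Chebyshev transform: row k holds T_k(t) = cos(k * arccos(t))
# sampled at the Chebyshev nodes t_j = cos(pi * (2j + 1) / (2N)); up to
# normalization this is the DCT-II matrix.
idx = np.arange(N)
F = np.sqrt(2.0 / N) * np.cos(np.pi * np.outer(idx, 2 * idx + 1) / (2 * N))
F[0] /= np.sqrt(2.0)              # the constant polynomial needs half weight

# build x whose transform x_hat = F @ x is exactly k-sparse
support = rng.choice(N, size=k, replace=False)
x_hat = np.zeros(N)
x_hat[support] = rng.standard_normal(k)
x = F.T @ x_hat                   # F is orthogonal, so F.T is its inverse

recovered = F @ x                 # dense O(N^2) recovery of the spectrum
print(np.allclose(F @ F.T, np.eye(N)), np.allclose(recovered, x_hat))
```

The dense transform here costs $$O(N^2)$$ per query vector; the point of the paper is to approximate the k-sparse $$\hat{x}$$ with query access to $$x$$ in time polynomial in $$k \log N$$.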
  5.
    Can linear systems be solved faster than matrix multiplication? While there has been remarkable progress for the special cases of graph structured linear systems, in the general setting, the bit complexity of solving an $$n \times n$$ linear system $Ax=b$ is $$\tilde{O}(n^\omega)$$, where $$\omega < 2.372864$$ is the matrix multiplication exponent. Improving on this has been an open problem even for sparse linear systems with poly$(n)$ condition number. In this paper, we present an algorithm that solves linear systems in sparse matrices asymptotically faster than matrix multiplication for any $$\omega > 2$$. This speedup holds for any input matrix $$A$$ with $$o(n^{\omega -1}/\log(\kappa(A)))$$ non-zeros, where $$\kappa(A)$$ is the condition number of $$A$$. For poly$(n)$-conditioned matrices with $$\tilde{O}(n)$$ nonzeros, and the current value of $$\omega$$, the bit complexity of our algorithm to solve to within any $$1/\text{poly}(n)$$ error is $$O(n^{2.331645})$$. Our algorithm can be viewed as an efficient, randomized implementation of the block Krylov method via recursive low displacement rank factorizations. It is inspired by the algorithm of [Eberly et al. ISSAC `06 `07] for inverting matrices over finite fields. In our analysis of numerical stability, we develop matrix anti-concentration techniques to bound the smallest eigenvalue and the smallest gap in eigenvalues of semi-random matrices. 
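For contrast with the paper's recursive block Krylov solver, the baseline Krylov method is plain conjugate gradients, whose per-iteration cost is one matrix-vector product. The sketch below is generic textbook CG, not the paper's algorithm; the SPD test matrix and tolerance are arbitrary assumptions:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A.

    Each iteration costs one matrix-vector product, so m iterations on a
    matrix with nnz nonzeros cost O(m * nnz) arithmetic operations.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter or len(b)):
        Ap = A @ p
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(4)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)     # SPD and well-conditioned (assumption)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```

Iteration-count bounds like this are classical; the paper's contribution is controlling the full bit complexity of such Krylov-type solvers, including numerical stability, for a broad class of sparse matrices.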