
Title: Log orthogonal functions: approximation properties and applications
Abstract: We present two new classes of orthogonal functions, log orthogonal functions and generalized log orthogonal functions, which are constructed by applying a $$\log$$ mapping to Laguerre polynomials. We develop basic approximation theory for these new orthogonal functions and apply them to solve several typical fractional differential equations whose solutions exhibit weak singularities. Our error analysis and numerical results show that methods based on the new orthogonal functions are particularly suitable for functions with weak singularities at one endpoint and can achieve an exponential convergence rate, as opposed to the low algebraic rates obtained with the usual orthogonal polynomials.
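To make the construction in the abstract concrete, here is a minimal numerical sketch assuming the simplest form of the log mapping (composing standard Laguerre polynomials with -log x); the paper's actual log orthogonal functions and generalized log orthogonal functions may use a different normalization, parameterization, or weight.

```python
import numpy as np
from numpy.polynomial.laguerre import Laguerre
from scipy.integrate import quad

# Illustrative sketch only (not necessarily the paper's exact definition):
# one natural "log mapping" applied to Laguerre polynomials.  The standard
# Laguerre polynomials L_n are orthonormal on (0, inf) with weight e^{-t},
# so the substitution t = -log(x) makes S_n(x) = L_n(-log x) orthonormal
# on (0, 1) with unit weight.

def S(n, x):
    """Evaluate S_n(x) = L_n(-log x) for x in (0, 1]."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return Laguerre(c)(-np.log(x))

# Numerical orthogonality check on (0, 1).
gram = np.array([[quad(lambda x: S(m, x) * S(n, x), 0.0, 1.0)[0]
                  for n in range(5)] for m in range(5)])
print(np.round(gram, 4))   # approximately the 5x5 identity matrix
```

Because S_n is a polynomial in log x rather than in x, the change of variables t = -log x turns weak endpoint behavior such as x^mu into a smooth exponential in t, which is roughly the intuition behind the exponential rates quoted in the abstract.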
Award ID(s):
2012585
PAR ID:
10329288
Author(s) / Creator(s):
Date Published:
Journal Name:
IMA Journal of Numerical Analysis
Volume:
42
Issue:
1
ISSN:
0272-4979
Page Range / eLocation ID:
712 to 743
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: We develop a numerical method for computing with orthogonal polynomials that are orthogonal on multiple, disjoint intervals for which analytical formulae are currently unknown. Our approach exploits the Fokas–Its–Kitaev Riemann–Hilbert representation of the orthogonal polynomials to produce a method for computing the first N recurrence coefficients. The method can also be used for pointwise evaluation of the polynomials and their Cauchy transforms throughout the complex plane. The method encodes the singularity behavior of weight functions using weighted Cauchy integrals of Chebyshev polynomials. This greatly improves the efficiency of the method, outperforming other available techniques. We demonstrate the fast convergence of our method and present applications to integrable systems and approximation theory.
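The Riemann–Hilbert machinery in the abstract above does not fit in a short snippet, but the quantities it computes, the three-term recurrence coefficients of polynomials orthogonal on disjoint intervals, can be illustrated with a naive (and far less efficient) quadrature-based Stieltjes procedure. The two-interval weight below is an assumed example, not one taken from the paper.

```python
import numpy as np

def stieltjes_recurrence(nodes, weights, N):
    """First N three-term recurrence coefficients (a_k, b_k) of the
    polynomials orthonormal w.r.t. the discrete measure
    sum_i weights[i] * delta(x - nodes[i]) (Stieltjes procedure)."""
    a, b = np.zeros(N), np.zeros(N)
    p_prev = np.zeros_like(nodes)
    p_curr = np.ones_like(nodes)
    p_curr /= np.sqrt(np.sum(weights * p_curr**2))
    for k in range(N):
        a[k] = np.sum(weights * nodes * p_curr**2)
        q = (nodes - a[k]) * p_curr - (b[k - 1] if k > 0 else 0.0) * p_prev
        b[k] = np.sqrt(np.sum(weights * q**2))
        p_prev, p_curr = p_curr, q / b[k]
    return a, b

# Assumed example weight: w(x) = 1 on the two disjoint intervals
# [-2, -1] and [1, 2] (not a weight from the paper).
def gauss_on_interval(lo, hi, m=400):
    x, w = np.polynomial.legendre.leggauss(m)
    return 0.5 * (hi - lo) * x + 0.5 * (hi + lo), 0.5 * (hi - lo) * w

x1, w1 = gauss_on_interval(-2.0, -1.0)
x2, w2 = gauss_on_interval(1.0, 2.0)
nodes, weights = np.concatenate([x1, x2]), np.concatenate([w1, w2])

a, b = stieltjes_recurrence(nodes, weights, 10)
print(a)   # close to zero, by symmetry of the support about the origin
print(b)
```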
  2. Abstract: We establish a new perturbation theory for orthogonal polynomials using a Riemann–Hilbert approach and consider applications in numerical linear algebra and random matrix theory. This new approach shows that the orthogonal polynomials with respect to two measures can be effectively compared using the difference of their Stieltjes transforms on a suitably chosen contour. Moreover, when two measures are close and satisfy some regularity conditions, we use the theta functions of a hyperelliptic Riemann surface to derive explicit and accurate expansion formulae for the perturbed orthogonal polynomials. In contrast to other approaches, a key strength of the methodology is that estimates can remain valid as the degree of the polynomial grows. The results are applied to analyze several numerical algorithms from linear algebra, including the Lanczos tridiagonalization procedure, the Cholesky factorization, and the conjugate gradient algorithm. As a case study, we investigate these algorithms applied to a general spiked sample covariance matrix model by considering the eigenvector empirical spectral distribution and its limits. For the first time, we give precise estimates on the output of the algorithms, applied to this wide class of random matrices, as the number of iterations diverges. In this setting, beyond the first order expansion, we also derive a new mesoscopic central limit theorem for the associated orthogonal polynomials and other quantities relevant to numerical algorithms.
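As a rough illustration of the setting in the abstract above (not the paper's analysis), the sketch below builds a spiked sample covariance matrix with assumed parameters and runs plain Lanczos tridiagonalization on it; the alpha/beta coefficients it produces are the recurrence (Jacobi) coefficients of the spectral measure that the orthogonal-polynomial perturbation theory tracks.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, spike = 1000, 500, 5.0           # assumed illustrative parameters

# Population covariance: identity with a single spiked eigenvalue.
sigma_sqrt = np.ones(p)
sigma_sqrt[0] = np.sqrt(spike)
X = rng.standard_normal((n, p)) * sigma_sqrt     # rows ~ N(0, Sigma)
W = X.T @ X / n                                  # sample covariance matrix

def lanczos(A, v, k):
    """k steps of Lanczos (no reorthogonalization); returns the diagonal
    (alpha) and off-diagonal (beta) of the tridiagonal matrix."""
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    q_prev = np.zeros_like(v)
    q = v / np.linalg.norm(v)
    for j in range(k):
        w = A @ q
        alpha[j] = q @ w
        w -= alpha[j] * q + (beta[j - 1] * q_prev if j > 0 else 0.0)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    return alpha, beta

alpha, beta = lanczos(W, rng.standard_normal(p), 20)
print(alpha[:5], beta[:5])
```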
  3. In this paper we consider the following sparse recovery problem. We have query access to a vector 𝐱 ∈ ℝ^N such that x̂ = 𝐅 𝐱 is k-sparse (or nearly k-sparse) for some orthogonal transform 𝐅. The goal is to output an approximation (in an 𝓁₂ sense) to x̂ in sublinear time. This problem has been well-studied in the special case that 𝐅 is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time O(k ⋅ polylog N). However, for transforms 𝐅 other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled. In this paper we give sublinear-time algorithms - running in time poly(k log(N)) - for solving the sparse recovery problem for orthogonal transforms 𝐅 that arise from orthogonal polynomials. More precisely, our algorithm works for any 𝐅 that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability. Our approach is to give a very general reduction from the k-sparse recovery problem to the 1-sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this 1-sparse recovery problem for transforms derived from Jacobi polynomials. Frequently, sparse FFT algorithms are described as implementing such a reduction; however, the technical details of such works are quite specific to the Fourier transform, and moreover the actual implementations of these algorithms do not use the 1-sparse algorithm as a black box. In this work we give a reduction that works for a broad class of orthogonal polynomial families, and which uses any 1-sparse recovery algorithm as a black box.
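For orientation only, the following sketch sets up the problem from the abstract above with Legendre polynomials (a Jacobi family): it builds the orthogonal transform F from Gauss–Legendre quadrature, forms a vector whose transform is k-sparse, and recovers it with a full matrix multiply. The parameters are assumed, and this is the trivial O(N^2) baseline, not the paper's sublinear-time algorithm.

```python
import numpy as np
from numpy.polynomial.legendre import legvander, leggauss

# Assumed illustrative sizes, not values from the paper.
N, k = 256, 5
nodes, weights = leggauss(N)

# Orthonormal Legendre transform: F[n, i] = sqrt(w_i) * p_n(nodes_i),
# where p_n = sqrt((2n+1)/2) * P_n.  Because N-point Gauss-Legendre
# quadrature is exact for the products p_m * p_n, F is an orthogonal
# N x N matrix.
V = legvander(nodes, N - 1)                   # V[i, n] = P_n(nodes_i)
scale = np.sqrt((2 * np.arange(N) + 1) / 2.0)
F = (V * scale).T * np.sqrt(weights)

rng = np.random.default_rng(1)
x_hat = np.zeros(N)
x_hat[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
x = F.T @ x_hat                               # F is orthogonal, so F^{-1} = F^T

# Trivial recovery by applying F; the paper's contribution is achieving
# this in poly(k log N) time instead of the O(N^2) work done here.
recovered = F @ x
print(np.allclose(recovered, x_hat))          # True
```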
  4. We study the expected number of real zeros for random linear combinations of orthogonal polynomials. It is well known that Kac polynomials, spanned by monomials with i.i.d. Gaussian coefficients, have only $$(2/\pi + o(1))\log{n}$$ expected real zeros in terms of the degree $$n$$. If the basis is given by the orthonormal polynomials associated with a compactly supported Borel measure on the real line, or associated with a Freud weight, then random linear combinations have $$n/\sqrt{3} + o(n)$$ expected real zeros. We prove that the same asymptotic relation holds for all random orthogonal polynomials on the real line associated with a large class of weights, and give local results on the expected number of real zeros. We also show that the counting measures of properly scaled zeros of these random polynomials converge weakly to either the Ullman distribution or the arcsine distribution. 
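The $$n/\sqrt{3}$$ asymptotic in the abstract above is easy to probe empirically. The sketch below (with assumed parameters) draws random linear combinations of Legendre polynomials with i.i.d. Gaussian coefficients and counts their real zeros in [-1, 1]; the Monte Carlo average should be comparable to n/sqrt(3), up to lower-order terms.

```python
import numpy as np
from numpy.polynomial.legendre import legroots

# Assumed illustrative parameters, not an experiment from the paper.
rng = np.random.default_rng(0)
n, trials = 60, 200
counts = []
for _ in range(trials):
    coeffs = rng.standard_normal(n + 1)        # degree-n random Legendre combination
    roots = legroots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-8].real
    counts.append(np.sum((real >= -1) & (real <= 1)))

print(np.mean(counts), n / np.sqrt(3))         # the two should be comparable
```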
  5. Abstract: Grothendieck polynomials were introduced by Lascoux and Schützenberger and play an important role in K-theoretic Schubert calculus. In this paper, we give a new definition of double stable Grothendieck polynomials based on an iterated residue operation. We illustrate the power of our definition by calculating the Grothendieck expansion of K-theoretic Thom polynomials of $$\mathcal{A}_2$$ singularities. We present this expansion in two versions: one displays its stabilization property, while the other displays its expected finiteness property.