Title: Log orthogonal functions: approximation properties and applications
Abstract: We present two new classes of orthogonal functions, log orthogonal functions and generalized log orthogonal functions, which are constructed by applying a $\log$ mapping to Laguerre polynomials. We develop basic approximation theory for these new orthogonal functions and apply them to solve several typical fractional differential equations whose solutions exhibit weak singularities. Our error analysis and numerical results show that methods based on the new orthogonal functions are particularly suitable for functions with weak singularities at one endpoint and can achieve exponential convergence rates, as opposed to the low algebraic rates obtained with the usual orthogonal polynomials.
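As a hedged illustration of the construction just described (composing Laguerre polynomials with a log mapping), the short Python sketch below builds toy basis functions of the form S_n(x) = L_n(-beta log x) and least-squares fits the weakly singular function x^0.6 on (0, 1], alongside a Chebyshev fit of the same size. The function name lof, the parameter beta, and the fitting experiment are illustrative assumptions only; the paper's log orthogonal functions include weights and scalings chosen for orthogonality, and its convergence results concern properly formulated spectral methods rather than this toy fit.

    # Toy sketch only: composes Laguerre polynomials with a log mapping,
    # S_n(x) = L_n(-beta * log x); the paper's log orthogonal functions also involve
    # a weight and a scaling parameter, so this is an assumption about the flavor,
    # not the exact definition.
    import numpy as np
    from numpy.polynomial import chebyshev, laguerre

    def lof(n, x, beta=1.0):
        """Evaluate the toy log-mapped Laguerre function L_n(-beta * log x) on (0, 1]."""
        coeffs = np.zeros(n + 1)
        coeffs[n] = 1.0
        return laguerre.lagval(-beta * np.log(x), coeffs)

    f = lambda x: x**0.6                      # weak singularity at the left endpoint
    x = np.linspace(1e-6, 1.0, 2000)
    N = 12                                    # number of basis functions

    A_lof = np.column_stack([lof(n, x) for n in range(N)])
    A_cheb = np.column_stack([chebyshev.chebval(2 * x - 1, np.eye(N)[n]) for n in range(N)])

    for name, A in [("log-mapped Laguerre", A_lof), ("Chebyshev", A_cheb)]:
        c, *_ = np.linalg.lstsq(A, f(x), rcond=None)
        print(f"{name:>20s}: max fit error with {N} modes = {np.max(np.abs(A @ c - f(x))):.2e}")

The comparison only indicates qualitatively how a log-mapped basis can capture an endpoint singularity that defeats a polynomial basis of equal size; the paper's error analysis makes this precise for its actual bases.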
Award ID(s): 2012585
NSF-PAR ID: 10329288
Author(s) / Creator(s):
Date Published:
Journal Name: IMA Journal of Numerical Analysis
Volume: 42
Issue: 1
ISSN: 0272-4979
Page Range / eLocation ID: 712–743
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    In this paper we consider the following sparse recovery problem. We have query access to a vector 𝐱 ∈ ℝ^N such that x̂ = 𝐅 𝐱 is k-sparse (or nearly k-sparse) for some orthogonal transform 𝐅. The goal is to output an approximation (in an 𝓁₂ sense) to x̂ in sublinear time. This problem has been well-studied in the special case that 𝐅 is the Discrete Fourier Transform (DFT), and a long line of work has resulted in sparse Fast Fourier Transforms that run in time O(k ⋅ polylog N). However, for transforms 𝐅 other than the DFT (or closely related transforms like the Discrete Cosine Transform), the question is much less settled. In this paper we give sublinear-time algorithms - running in time poly(k log(N)) - for solving the sparse recovery problem for orthogonal transforms 𝐅 that arise from orthogonal polynomials. More precisely, our algorithm works for any 𝐅 that is an orthogonal polynomial transform derived from Jacobi polynomials. The Jacobi polynomials are a large class of classical orthogonal polynomials (and include Chebyshev and Legendre polynomials as special cases), and show up extensively in applications like numerical analysis and signal processing. One caveat of our work is that we require an assumption on the sparsity structure of the sparse vector, although we note that vectors with random support have this property with high probability. Our approach is to give a very general reduction from the k-sparse recovery problem to the 1-sparse recovery problem that holds for any flat orthogonal polynomial transform; then we solve this one-sparse recovery problem for transforms derived from Jacobi polynomials. Frequently, sparse FFT algorithms are described as implementing such a reduction; however, the technical details of such works are quite specific to the Fourier transform and moreover the actual implementations of these algorithms do not use the 1-sparse algorithm as a black box. In this work we give a reduction that works for a broad class of orthogonal polynomial families, and which uses any 1-sparse recovery algorithm as a black box.
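    A hedged sketch of the problem setup in item 1 (not of the paper's sublinear-time algorithm): it builds an orthogonal transform 𝐅 from Chebyshev polynomials, a special case of Jacobi polynomials, sampled at Gauss–Chebyshev nodes, draws a k-sparse x̂, and recovers it with the dense O(N²) baseline 𝐅𝐱. The quadrature-based construction of 𝐅 is an assumption made for illustration; the paper's algorithms achieve the recovery with only poly(k log N) queries and time.

        # Sketch of the problem setup only (not the paper's sublinear algorithm):
        # build an orthogonal transform F from Chebyshev polynomials (a special case
        # of Jacobi polynomials) sampled at Gauss-Chebyshev nodes, draw a k-sparse
        # coefficient vector x_hat, and recover it with the dense O(N^2) baseline.
        import numpy as np
        from numpy.polynomial import chebyshev

        N, k = 256, 5
        nodes, weights = chebyshev.chebgauss(N)      # Gauss-Chebyshev nodes and weights

        # Row n of F is the n-th Chebyshev polynomial at the nodes, normalized and
        # scaled by sqrt(weights) so that F is orthogonal (F F^T = I).
        eye = np.eye(N)
        V = np.column_stack([chebyshev.chebval(nodes, eye[n]) for n in range(N)])
        norms = np.sqrt(np.sum(weights[:, None] * V**2, axis=0))
        F = (np.sqrt(weights)[:, None] * V / norms).T
        assert np.allclose(F @ F.T, np.eye(N), atol=1e-8)

        rng = np.random.default_rng(0)
        x_hat = np.zeros(N)
        x_hat[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)

        x = F.T @ x_hat          # the vector we have (query) access to
        recovered = F @ x        # dense baseline; the paper needs only poly(k log N) queries
        print("exact recovery:", np.allclose(recovered, x_hat))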
  2. Abstract: We establish a new perturbation theory for orthogonal polynomials using a Riemann–Hilbert approach and consider applications in numerical linear algebra and random matrix theory. This new approach shows that the orthogonal polynomials with respect to two measures can be effectively compared using the difference of their Stieltjes transforms on a suitably chosen contour. Moreover, when two measures are close and satisfy some regularity conditions, we use the theta functions of a hyperelliptic Riemann surface to derive explicit and accurate expansion formulae for the perturbed orthogonal polynomials. In contrast to other approaches, a key strength of the methodology is that estimates can remain valid as the degree of the polynomial grows. The results are applied to analyze several numerical algorithms from linear algebra, including the Lanczos tridiagonalization procedure, the Cholesky factorization, and the conjugate gradient algorithm. As a case study, we investigate these algorithms applied to a general spiked sample covariance matrix model by considering the eigenvector empirical spectral distribution and its limits. For the first time, we give precise estimates on the output of the algorithms, applied to this wide class of random matrices, as the number of iterations diverges. In this setting, beyond the first order expansion, we also derive a new mesoscopic central limit theorem for the associated orthogonal polynomials and other quantities relevant to numerical algorithms.
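    A hedged numerical companion to item 2: the diagonal and off-diagonal entries produced by Lanczos tridiagonalization are the three-term recurrence coefficients of the orthogonal polynomials for the matrix's spectral measure weighted by the starting vector, the objects the Riemann–Hilbert analysis controls. The spiked sample covariance instance below (identity population covariance with one spiked coordinate) is a toy choice for illustration and is not claimed to match the paper's exact model or normalization.

        import numpy as np

        rng = np.random.default_rng(1)
        n, m, spike = 400, 800, 8.0

        # Spiked sample covariance W = X X^T / m: the population covariance is the
        # identity except for one spiked coordinate with variance `spike`.
        X = rng.normal(size=(n, m))
        X[0] *= np.sqrt(spike)
        W = X @ X.T / m

        def lanczos(A, q1, steps):
            """Lanczos tridiagonalization with full reorthogonalization; returns the
            diagonal (alpha) and off-diagonal (beta) recurrence coefficients."""
            Q = np.zeros((A.shape[0], steps + 1))
            Q[:, 0] = q1 / np.linalg.norm(q1)
            alphas, betas = [], []
            for j in range(steps):
                w = A @ Q[:, j] - (betas[-1] * Q[:, j - 1] if j > 0 else 0.0)
                alpha = Q[:, j] @ w
                w = w - alpha * Q[:, j]
                w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # reorthogonalize
                beta = np.linalg.norm(w)
                alphas.append(alpha)
                betas.append(beta)
                Q[:, j + 1] = w / beta
            return np.array(alphas), np.array(betas[:-1])

        alphas, betas = lanczos(W, rng.normal(size=n), steps=15)
        print("alpha_k:", np.round(alphas, 3))
        print("beta_k :", np.round(betas, 3))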
  3. Guruswami, Venkatesan (Ed.)
    Recent efforts in Analysis of Boolean Functions aim to extend core results to new spaces, including to the slice binom([n],k), the hypergrid [K]ⁿ, and noncommutative spaces (matrix algebras). We present here a new way to relate functions on the hypergrid (or products of cyclic groups) to their harmonic extensions over the polytorus. We show the supremum of a function f over products of the cyclic group {exp(2π i k/K)}_{k = 1}^K controls the supremum of f over the entire polytorus ({z ∈ ℂ:|z| = 1}ⁿ), with multiplicative constant C depending on K and deg(f) only. This Remez-type inequality appears to be the first such estimate that is dimension-free (i.e., C does not depend on n). This dimension-free Remez-type inequality removes the main technical barrier to giving 𝒪(log n) sample complexity, polytime algorithms for learning low-degree polynomials on the hypergrid and low-degree observables on level-K qudit systems. In particular, our dimension-free Remez inequality implies new Bohnenblust-Hille-type estimates which are central to the learning algorithms and appear unobtainable via standard techniques. Thus we extend to new spaces a recent line of work [Eskenazis and Ivanisvili, 2022; Huang et al., 2022; Volberg and Zhang, 2023] that gave similarly efficient methods for learning low-degree polynomials on the hypercube and observables on qubits. An additional product of these efforts is a new class of distributions over which arbitrary quantum observables are well-approximated by their low-degree truncations - a phenomenon that greatly extends the reach of low-degree learning in quantum science [Huang et al., 2022]. 
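    A small numerical illustration of the quantity bounded by the Remez-type inequality in item 3: it compares the maximum of a random low-degree polynomial over the K-th-roots-of-unity grid with a sampled estimate of its maximum over the polytorus. The instance size, coefficient distribution, and sampling scheme are arbitrary illustrative choices; the theorem's content is that the ratio is bounded by a constant depending only on K and deg(f), uniformly in the dimension n.

        # Compare the sup of a random low-degree polynomial over the K-th-roots-of-unity
        # grid with a sampled estimate of its sup over the polytorus; the theorem bounds
        # the ratio by a constant depending only on K and the degree, not on n.
        import itertools
        import numpy as np

        rng = np.random.default_rng(2)
        n, K, d = 4, 3, 2      # number of variables, grid size, total degree

        monomials = [a for a in itertools.product(range(d + 1), repeat=n) if sum(a) <= d]
        exponents = [np.array(a) for a in monomials]
        coeffs = rng.normal(size=len(monomials)) + 1j * rng.normal(size=len(monomials))

        def evaluate(z):
            return sum(c * np.prod(z ** e) for c, e in zip(coeffs, exponents))

        # Sup over the hypergrid {exp(2*pi*i*k/K)}^n, computed exactly by enumeration.
        grid = (np.exp(2j * np.pi * np.array(a) / K) for a in itertools.product(range(K), repeat=n))
        sup_grid = max(abs(evaluate(z)) for z in grid)

        # Sup over the polytorus {|z_j| = 1}^n, estimated from random samples (a lower bound).
        samples = np.exp(2j * np.pi * rng.random(size=(5000, n)))
        sup_torus = max(abs(evaluate(z)) for z in samples)

        print(f"sup over grid      ~ {sup_grid:.4f}")
        print(f"sup over polytorus ~ {sup_torus:.4f} (sampled)")
        print(f"ratio              ~ {sup_torus / sup_grid:.3f}")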
  4. Let f: {0,1}^n → {0,1} be a boolean function, and let f∧(x, y) = f(x ∧ y) denote the AND-function of f, where x ∧ y denotes bit-wise AND. We study the deterministic communication complexity of f∧ and show that, up to a log n factor, it is bounded by a polynomial in the logarithm of the real rank of the communication matrix of f∧. This comes within a log n factor of establishing the log-rank conjecture for AND-functions with no assumptions on f. Our result stands in contrast with previous results on special cases of the log-rank conjecture, which needed significant restrictions on f such as monotonicity or low 𝔽₂-degree. Our techniques can also be used to prove (within a log n factor) a lifting theorem for AND-functions, stating that the deterministic communication complexity of f∧ is polynomially related to the AND-decision tree complexity of f. The results rely on a new structural result regarding boolean functions f: {0,1}^n → {0,1} with a sparse polynomial representation, which may be of independent interest. We show that if the polynomial computing f has few monomials, then the set system of the monomials has a small hitting set, of size poly-logarithmic in its sparsity. We also establish extensions of this result to multilinear polynomials on {0,1}^n with a larger range.
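    A brute-force companion to item 4: writing f in its unique multilinear real form f(x) = Σ_S c_S Π_{i∈S} x_i gives f(x ∧ y) = Σ_S c_S (Π_{i∈S} x_i)(Π_{i∈S} y_i), so the real rank of the communication matrix of f∧ equals the number of nonzero monomials of f. The sketch below checks this standard identity on a random f over n = 4 bits; it is background for the rank/sparsity discussion in the abstract, not an implementation of the paper's new structural result.

        # For f_wedge(x, y) = f(x AND y), writing f in its multilinear form
        # f(x) = sum_S c_S * prod_{i in S} x_i gives M[x][y] = sum_S c_S [S <= x][S <= y],
        # so the real rank of M equals the number of nonzero monomial coefficients of f.
        # Brute-force check of this identity for a random f on n = 4 bits.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 4
        N = 1 << n
        f = rng.integers(0, 2, size=N)            # truth table of f: {0,1}^n -> {0,1}

        # Multilinear coefficients c_S via Moebius inversion over subsets.
        c = f.astype(float)
        for i in range(n):
            for x in range(N):
                if x & (1 << i):
                    c[x] -= c[x ^ (1 << i)]
        sparsity = int(np.count_nonzero(np.abs(c) > 1e-9))

        # Communication matrix of the AND-function f_wedge.
        M = np.array([[f[x & y] for y in range(N)] for x in range(N)], dtype=float)

        print("monomial sparsity of f:", sparsity)
        print("real rank of M_f_wedge:", np.linalg.matrix_rank(M))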
  5. Abstract: We develop a numerical method for computing with orthogonal polynomials that are orthogonal on multiple, disjoint intervals for which analytical formulae are currently unknown. Our approach exploits the Fokas–Its–Kitaev Riemann–Hilbert representation of the orthogonal polynomials to produce a method to compute the first N recurrence coefficients. The method can also be used for pointwise evaluation of the polynomials and their Cauchy transforms throughout the complex plane. The method encodes the singularity behavior of weight functions using weighted Cauchy integrals of Chebyshev polynomials. This greatly improves the efficiency of the method, outperforming other available techniques. We demonstrate the fast convergence of our method and present applications to integrable systems and approximation theory.
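    To make the target quantities in item 5 concrete, the sketch below uses the classical discretized Stieltjes procedure, explicitly not the paper's Riemann–Hilbert method, to compute the first few three-term recurrence coefficients of polynomials orthogonal on two disjoint intervals. The weight w ≡ 1 on [-2, -1] ∪ [1, 2], the number of quadrature nodes, and the convention b₀ = total mass are illustrative choices.

        # Classical discretized Stieltjes baseline (NOT the paper's Riemann-Hilbert
        # method): compute recurrence coefficients a_k, b_k of the monic polynomials
        # orthogonal with respect to w(x) = 1 on [-2, -1] U [1, 2], using a Gauss-Legendre
        # discretization of the measure on each interval.
        import numpy as np

        def discretized_stieltjes(nodes, weights, N):
            """First N recurrence coefficients for the measure sum_i weights[i] * delta_{nodes[i]},
            with the convention b[0] = total mass."""
            a, b = np.zeros(N), np.zeros(N)
            p_prev, p = np.zeros_like(nodes), np.ones_like(nodes)
            norm_prev, norm = 1.0, weights @ p**2
            b[0] = norm
            for k in range(N):
                a[k] = (weights @ (nodes * p**2)) / norm
                if k > 0:
                    b[k] = norm / norm_prev
                p_next = (nodes - a[k]) * p - (b[k] if k > 0 else 0.0) * p_prev
                p_prev, p = p, p_next
                norm_prev, norm = norm, weights @ p**2
            return a, b

        # Gauss-Legendre nodes on [-1, 1], mapped onto the two intervals.
        x_gl, w_gl = np.polynomial.legendre.leggauss(200)
        nodes = np.concatenate([(x_gl - 3) / 2, (x_gl + 3) / 2])
        weights = np.concatenate([w_gl / 2, w_gl / 2])

        a, b = discretized_stieltjes(nodes, weights, N=10)
        print("a_k:", np.round(a, 6))   # ~0 by symmetry of the weight about x = 0
        print("b_k:", np.round(b, 6))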