Title: On Uniform Convergence of Diagonal Multipoint Padé Approximants For Entire Functions
We prove that for most entire functions f in the sense of category, a strong form of the Baker-Gammel-Wills Conjecture holds. More precisely, there is an infinite sequence S of positive integers n, such that given any r > 0, and multipoint Padé approximants R_n to f with interpolation points in {z : |z| ≤ r}, {R_n}_{n∈S} converges locally uniformly to f in the plane. The sequence S does not depend on r, nor on the interpolation points. For entire functions with smooth rapidly decreasing coefficients, full diagonal sequences of multipoint Padé approximants converge.
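The theorem concerns multipoint Padé approximants: rational functions of type (n, n) that interpolate f at 2n+1 prescribed points. As a purely illustrative sketch, not the construction studied in the paper, the following Python snippet computes the classical diagonal [n/n] Padé approximant (the special case in which every interpolation point sits at the origin) from the Taylor coefficients of an entire function, and compares it with exp at a sample point. The function name diagonal_pade and the choice f = exp are assumptions of the example.

    import numpy as np
    from math import factorial

    def diagonal_pade(taylor, n):
        # [n/n] Pade approximant built from the Taylor coefficients taylor[0..2n];
        # returns numerator and denominator coefficients in ascending powers,
        # normalized so that q[0] = 1.
        c = np.asarray(taylor, dtype=float)
        # Denominator: sum_{j=1}^{n} c[k-j]*q[j] = -c[k] for k = n+1..2n.
        A = np.array([[c[k - j] for j in range(1, n + 1)]
                      for k in range(n + 1, 2 * n + 1)])
        q = np.concatenate(([1.0], np.linalg.solve(A, -c[n + 1:2 * n + 1])))
        # Numerator: p[k] = sum_{j=0}^{min(k,n)} c[k-j]*q[j] for k = 0..n.
        p = np.array([sum(c[k - j] * q[j] for j in range(min(k, n) + 1))
                      for k in range(n + 1)])
        return p, q

    # [5/5] approximant of exp from its Taylor coefficients, checked at z = 2;
    # the two printed values agree to several digits.
    n = 5
    c = [1.0 / factorial(k) for k in range(2 * n + 1)]
    p, q = diagonal_pade(c, n)
    z = 2.0
    print(np.polyval(p[::-1], z) / np.polyval(q[::-1], z), np.exp(z))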
Award ID(s):
1800251
NSF-PAR ID:
10091966
Author(s) / Creator(s):
Date Published:
Journal Name:
Constructive Approximation
Volume:
49
ISSN:
1432-0940
Page Range / eLocation ID:
149-174
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We prove that for most entire functions f in the sense of category, a strong form of the Baker-Gammel-Wills Conjecture holds. More precisely, there is an infinite sequence S of positive integers n, such that given any r>0, and multipoint Padé approximants R_{n} to f with interpolation points in {z:|z|≤r}, {R_{n}}_{n∈S} converges locally uniformly to f in the plane. The sequence S does not depend on r, nor on the interpolation points. For entire functions with smooth rapidly decreasing coefficients, full diagonal sequences of multipoint Padé approximants converge. 
  2. We strengthen the classical approximation theorems of Weierstrass, Runge, and Mergelyan by showing the polynomial and rational approximants can be taken to have a simple geometric structure. In particular, when approximating a function $f$ on a compact set $K$, the critical points of our approximants may be taken to lie in any given domain containing $K$, and all the critical values in any given neighborhood of the polynomially convex hull of $f(K)$. 
  3. Amir Hashemi (Ed.)
    We present Hermite polynomial interpolation algorithms that, for a sparse univariate polynomial f with coefficients from a field, compute the polynomial from fewer points than the classical algorithms. If the interpolating polynomial f has t terms, our algorithms require argument/value triples (w^i, f(w^i), f'(w^i)) for i = 0, ..., t + ceiling( (t+1)/2 ) - 1, where w is randomly sampled and the probability of a correct output is determined from a degree bound for f. With f' we denote the derivative of f. Our algorithms generalize to multivariate polynomials, higher derivatives and sparsity with respect to Chebyshev polynomial bases. We have algorithms that can correct errors in the points by oversampling at a limited number of good values. If an upper bound B >= t for the number of terms is given, our algorithms use a randomly selected w and, with high probability, ceiling( t/2 ) + B triples, but then never return an incorrect output. The algorithms are based on Prony's sparse interpolation algorithm. While Prony's algorithm and its variants use fewer values, namely, 2t+1 and t+B values f(w^i), respectively, they need more arguments w^i. The situation mirrors that in algebraic error-correcting codes, where the Reed-Solomon code requires fewer values than the multiplicity code, which is based on Hermite interpolation, but the Reed-Solomon code requires more distinct arguments. Our sparse Hermite interpolation algorithms can interpolate polynomials over finite fields and over the complex numbers, and from floating point data. Our Prony-based approach does not encounter the Birkhoff phenomenon of Hermite interpolation, in which a gap in the derivative values causes multiple interpolants. We can interpolate from t+1 values of f and 2t-1 values of f'. (A sketch of the classical Prony approach that these algorithms build on appears after this list.)
  4. Multivariate multipoint evaluation is the problem of evaluating a multivariate polynomial, given as a coefficient vector, simultaneously at multiple evaluation points. In this work, we show that there exists a deterministic algorithm for multivariate multipoint evaluation over any finite field F that outputs the evaluations of an m-variate polynomial of degree less than d in each variable at N points in time (d^m + N)^{1+o(1)} · poly(m, d, log |F|) for all m ∈ N and all sufficiently large d ∈ N. A previous work of Kedlaya and Umans (FOCS 2008, SICOMP 2011) achieved the same time complexity when the number of variables m is at most d^{o(1)} and had left the problem of removing this condition as an open problem. A recent work of Bhargava, Ghosh, Kumar and Mohapatra (STOC 2022) answered this question when the underlying field is not too large and has characteristic less than d^{o(1)}. In this work, we remove this constraint on the number of variables over all finite fields, thereby answering the question of Kedlaya and Umans over all finite fields. Our algorithm relies on a non-trivial combination of ideas from three seemingly different previously known algorithms for multivariate multipoint evaluation, namely the algorithms of Kedlaya and Umans, that of Björklund, Kaski and Williams (IPEC 2017, Algorithmica 2019), and that of Bhargava, Ghosh, Kumar and Mohapatra, together with a result of Bombieri and Vinogradov from analytic number theory about the distribution of primes in an arithmetic progression. We also present a second algorithm for multivariate multipoint evaluation that is completely elementary and, in particular, avoids the use of the Bombieri–Vinogradov Theorem. However, it requires a mild assumption that the field size is bounded by an exponential tower in d of bounded height. (A sketch of the naive baseline for multipoint evaluation appears after this list.)
  5. We present approximation and exact algorithms for piecewise regression of univariate and bivariate data using fixed-degree polynomials. Specifically, given a set S of n data points (x_1, y_1), ..., (x_n, y_n) ∈ R^d × R where d ∈ {1, 2}, the goal is to segment the x_i's into some (arbitrary) number of disjoint pieces P_1, ..., P_k, where each piece P_j is associated with a fixed-degree polynomial f_j : R^d → R, so as to minimize the total loss function λk + ∑_{i=1}^{n} (y_i − f(x_i))^2, where λ ≥ 0 is a regularization term that penalizes model complexity (the number of pieces) and f : P_1 ∪ ... ∪ P_k → R is the piecewise polynomial function defined by f|_{P_j} = f_j. The pieces P_1, ..., P_k are disjoint intervals of R in the case of univariate data and disjoint axis-aligned rectangles in the case of bivariate data. Our error approximation allows the use of any fixed-degree polynomial, not just linear functions. Our main results are the following. For univariate data, we present a (1 + ε)-approximation algorithm with time complexity O((n/ε) log(1/ε)), assuming that the data is presented in sorted order of the x_i's. For bivariate data, we present three results: a sub-exponential exact algorithm with running time n^{O(√n)}; a polynomial-time constant-factor approximation algorithm; and a quasi-polynomial time approximation scheme (QPTAS). The bivariate case is believed to be NP-hard as folklore, but we could not find a published record in the literature, so in this paper we also present a hardness proof for completeness. (A sketch of an exact dynamic program for the univariate case appears after this list.)
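For item 3, the sketch below shows the classical Prony / Ben-Or–Tiwari style sparse interpolation that the Hermite variants build on: it recovers a t-sparse polynomial from the 2t values f(w^0), ..., f(w^{2t-1}) when the exact term count t is known, not from the smaller number of Hermite triples described in the abstract. The example polynomial, the base w = 2, and the use of floating point linear algebra are assumptions of this illustration.

    import numpy as np

    def prony_sparse_interpolate(values, t, w):
        # Recover a t-sparse polynomial from values[i] = f(w**i), i = 0..2t-1.
        # Classical Prony / Ben-Or--Tiwari sketch over the reals; w must not be
        # a root of unity.  Returns (exponent, coefficient) pairs.
        a = np.asarray(values, dtype=float)
        # 1. Solve the t x t Hankel system for Lambda(z) = z^t + l_{t-1} z^{t-1}
        #    + ... + l_0, whose roots are the term locators b_j = w^{e_j}.
        H = np.array([[a[i + j] for j in range(t)] for i in range(t)])
        lam = np.linalg.solve(H, -a[t:2 * t])
        roots = np.roots(np.concatenate(([1.0], lam[::-1])))
        # 2. Read off the exponents e_j = log_w(b_j).
        exponents = [int(round(np.log(r.real) / np.log(w))) for r in roots]
        # 3. Coefficients from a transposed Vandermonde system in the b_j.
        b = np.array([float(w) ** e for e in exponents])
        V = np.vander(b, N=t, increasing=True).T      # V[i, j] = b_j ** i
        coeffs = np.linalg.solve(V, a[:t])
        return sorted(zip(exponents, coeffs))

    # f(x) = 3x^5 + 7x^2 - 2 has t = 3 terms; sample at powers of w = 2.
    def f(x):
        return 3 * x**5 + 7 * x**2 - 2

    w, t = 2, 3
    values = [f(w**i) for i in range(2 * t)]
    print(prony_sparse_interpolate(values, t, w))     # ~ [(0, -2.0), (2, 7.0), (5, 3.0)]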
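For item 4, the sketch below only fixes the problem statement and the naive baseline: evaluating an m-variate polynomial of degree less than d in each variable, stored as a d × ... × d coefficient array, at N points costs roughly N · d^m term evaluations, which is what the (d^m + N)^{1+o(1)} algorithms improve on. It is not the algorithm of the paper; the prime modulus, the tensor layout, and the function name are assumptions of the example.

    import itertools
    import numpy as np

    def naive_multipoint_eval(coeffs, points, p):
        # coeffs[e1, ..., em] is the coefficient of x1^e1 * ... * xm^em
        # (degree < d in each variable); points is a list of m-tuples over Z/p.
        # Costs about N * d^m term evaluations -- the baseline that the fast
        # multipoint-evaluation algorithms improve on.
        evals = []
        for pt in points:
            total = 0
            for expo in itertools.product(*(range(s) for s in coeffs.shape)):
                term = int(coeffs[expo])
                for xj, ej in zip(pt, expo):
                    term = term * pow(int(xj), ej, p) % p
                total = (total + term) % p
            evals.append(total)
        return evals

    # Example: m = 2 variables, degree < 3 in each, over the field F_101.
    p = 101
    rng = np.random.default_rng(0)
    coeffs = rng.integers(0, p, size=(3, 3))
    points = [(2, 5), (7, 11), (0, 1)]
    print(naive_multipoint_eval(coeffs, points, p))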
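For item 5, the sketch below is a simple exact dynamic program for the univariate case (d = 1) of the penalized objective λk + ∑_i (y_i − f(x_i))^2. It performs O(n^2) segment fits, so it is far slower than the paper's (1 + ε)-approximation, and it uses numpy.polyfit for the fixed-degree least-squares fit on each candidate piece; the function name, degree, and λ values are illustrative.

    import numpy as np

    def piecewise_fit(x, y, degree=1, lam=1.0):
        # Exact DP for univariate piecewise polynomial regression: minimizes
        # lam * k + sum_i (y_i - f(x_i))^2 over segmentations of the sorted x's
        # into k contiguous pieces, each fit by a least-squares polynomial of
        # the given degree.  Uses O(n^2) segment fits.
        order = np.argsort(x)
        x, y = np.asarray(x, dtype=float)[order], np.asarray(y, dtype=float)[order]
        n = len(x)

        def seg_cost(i, j):
            # Squared residual of the best fit to the points i..j-1.
            deg = min(degree, j - i - 1)          # avoid over-parameterized fits
            coef = np.polyfit(x[i:j], y[i:j], deg)
            res = y[i:j] - np.polyval(coef, x[i:j])
            return float(res @ res)

        dp = np.full(n + 1, np.inf)               # dp[j]: best cost of first j points
        dp[0], cut = 0.0, [0] * (n + 1)           # cut[j]: start index of last piece
        for j in range(1, n + 1):
            for i in range(j):
                c = dp[i] + seg_cost(i, j) + lam
                if c < dp[j]:
                    dp[j], cut[j] = c, i
        pieces, j = [], n                          # recover the piece boundaries
        while j > 0:
            pieces.append((cut[j], j))
            j = cut[j]
        return float(dp[n]), pieces[::-1]

    # Two noisy linear pieces with a break at x = 5; the DP should place one cut there.
    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 10.0, 40)
    y = np.where(x < 5, 2.0 * x, 20.0 - 3.0 * (x - 5.0)) + 0.1 * rng.standard_normal(40)
    print(piecewise_fit(x, y, degree=1, lam=0.5))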