We prove that for most entire functions f in the sense of category, a strong form of the Baker-Gammel-Wills Conjecture holds. More precisely, there is an infinite sequence S of positive integers n such that, given any r > 0 and multipoint Padé approximants R_{n} to f with interpolation points in {z : |z| ≤ r}, {R_{n}}_{n∈S} converges locally uniformly to f in the plane. The sequence S does not depend on r, nor on the interpolation points. For entire functions with smooth, rapidly decreasing coefficients, full diagonal sequences of multipoint Padé approximants converge.
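The diagonal Padé construction behind this result can be illustrated numerically. A minimal sketch, assuming NumPy is available, that builds the classical [n/n] Padé approximant of exp from its Taylor coefficients (i.e. the classical Padé case, with all interpolation points at 0, rather than the multipoint approximants of the paper):

```python
import numpy as np
from math import factorial, e

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].
    Returns numerator p and denominator q coefficients with q[0] = 1."""
    c = np.asarray(c, dtype=float)
    # Denominator: sum_{j=1..n} c[k-j] q_j = -c[k] for k = m+1, ..., m+n.
    A = np.array([[c[k - j] if k >= j else 0.0 for j in range(1, n + 1)]
                  for k in range(m + 1, m + n + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1:m + n + 1])))
    # Numerator: truncated Cauchy product of the Taylor series with q.
    p = np.array([sum(c[i - j] * q[j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return p, q

taylor = [1.0 / factorial(k) for k in range(11)]  # Taylor coefficients of exp
p, q = pade(taylor, 5, 5)
approx = np.polyval(p[::-1], 1.0) / np.polyval(q[::-1], 1.0)
print(abs(approx - e))  # the [5/5] diagonal approximant of exp is very accurate at z = 1
```

For exp, the full diagonal sequence of Padé approximants is known to converge locally uniformly, which is the behavior the abstract extends (along a subsequence) to most entire functions.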
On Uniform Convergence of Diagonal Multipoint Padé Approximants For Entire Functions
 Award ID(s):
 1800251
 NSF-PAR ID:
 10091966
 Date Published:
 Journal Name:
 Constructive approximation
 Volume:
 49
 ISSN:
 1432-0940
 Page Range / eLocation ID:
 149-174
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this


We strengthen the classical approximation theorems of Weierstrass, Runge, and Mergelyan by showing that the polynomial and rational approximants can be taken to have a simple geometric structure. In particular, when approximating a function $f$ on a compact set $K$, the critical points of our approximants may be taken to lie in any given domain containing $K$, and all the critical values in any given neighborhood of the polynomially convex hull of $f(K)$.
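The Weierstrass-type convergence being strengthened here can be seen numerically. A minimal sketch, assuming NumPy is available, using Chebyshev interpolation (one standard way to produce uniformly convergent polynomial approximants on an interval; the choice of cos as the target is illustrative only):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Interpolate cos at 21 Chebyshev points of the interval [-1, 1].
f = np.cos
approx = Chebyshev.interpolate(f, 20, domain=[-1, 1])

# Measure the uniform (sup-norm) error on a fine grid.
grid = np.linspace(-1, 1, 1001)
max_err = float(np.max(np.abs(approx(grid) - f(grid))))
print(max_err)  # essentially machine precision for a smooth function like cos
```

This only illustrates uniform polynomial approximation on a compact set; the result above additionally controls where the approximants' critical points and critical values lie.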

Amir Hashemi (Ed.) We present Hermite polynomial interpolation algorithms that, for a sparse univariate polynomial f with coefficients from a field, compute the polynomial from fewer points than the classical algorithms. If the interpolating polynomial f has t terms, our algorithms require argument/value triples (w^i, f(w^i), f'(w^i)) for i = 0, ..., t + ⌈(t+1)/2⌉ − 1, where w is randomly sampled and the probability of a correct output is determined from a degree bound for f. By f' we denote the derivative of f. Our algorithms generalize to multivariate polynomials, higher derivatives, and sparsity with respect to Chebyshev polynomial bases. We have algorithms that can correct errors in the points by oversampling at a limited number of good values. If an upper bound B ≥ t for the number of terms is given, our algorithms use a randomly selected w and, with high probability, ⌈t/2⌉ + B triples, but then never return an incorrect output. The algorithms are based on Prony's sparse interpolation algorithm. While Prony's algorithm and its variants use fewer values, namely 2t+1 and t+B values f(w^i), respectively, they need more arguments w^i. The situation mirrors that in algebraic error-correcting codes, where the Reed-Solomon code requires fewer values than the multiplicity code, which is based on Hermite interpolation, but the Reed-Solomon code requires more distinct arguments. Our sparse Hermite interpolation algorithms can interpolate polynomials over finite fields and over the complex numbers, and from floating point data. Our Prony-based approach does not encounter the Birkhoff phenomenon of Hermite interpolation, where a gap in the derivative values causes multiple interpolants. We can interpolate from t+1 values of f and 2t−1 values of f'.
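Prony's sparse interpolation, the basis of these algorithms, can be sketched for the plain (non-Hermite) case. A minimal NumPy illustration, with a hypothetical 3-term polynomial and the base w = 2 chosen for the example (the actual algorithms sample w randomly and also use derivative values):

```python
import numpy as np

# A hypothetical 3-term sparse polynomial: f(x) = 7 + 2x^2 + 3x^5.
terms = {0: 7.0, 2: 2.0, 5: 3.0}
t = len(terms)
w = 2.0  # evaluation base; randomly chosen in the actual algorithm

def f(x):
    return sum(c * x ** e for e, c in terms.items())

# Prony: the 2t values v_i = f(w^i) satisfy a linear recurrence whose
# characteristic polynomial has roots b_j = w^{e_j}.
v = np.array([f(w ** i) for i in range(2 * t)])
H = np.array([[v[i + j] for j in range(t)] for i in range(t)])
a = np.linalg.solve(H, -v[t:2 * t])       # recurrence coefficients a_0..a_{t-1}
prony = np.concatenate(([1.0], a[::-1]))  # z^t + a_{t-1} z^{t-1} + ... + a_0
roots = np.roots(prony).real              # the base points b_j = w^{e_j}

# Read off the exponents, then solve a Vandermonde system for coefficients.
exponents = sorted(int(round(np.log(b) / np.log(w))) for b in roots)
V = np.array([[(w ** e) ** i for e in exponents] for i in range(t)])
c = np.linalg.solve(V, v[:t])
print(exponents)  # recovers the support {0, 2, 5}
```

The Hermite variants above reduce the number of distinct arguments w^i by also consuming f'(w^i) at each one.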

Multivariate multipoint evaluation is the problem of evaluating a multivariate polynomial, given as a coefficient vector, simultaneously at multiple evaluation points. In this work, we show that there exists a deterministic algorithm for multivariate multipoint evaluation over any finite field F that outputs the evaluations of an m-variate polynomial of degree less than d in each variable at N points in time (d^m + N)^{1+o(1)} · poly(m, d, log |F|) for all m ∈ N and all sufficiently large d ∈ N. A previous work of Kedlaya and Umans (FOCS 2008, SICOMP 2011) achieved the same time complexity when the number of variables m is at most d^{o(1)} and had left the problem of removing this condition as an open problem. A recent work of Bhargava, Ghosh, Kumar and Mohapatra (STOC 2022) answered this question when the underlying field is not too large and has characteristic less than d^{o(1)}. In this work, we remove this constraint on the number of variables over all finite fields, thereby answering the question of Kedlaya and Umans over all finite fields. Our algorithm relies on a nontrivial combination of ideas from three seemingly different previously known algorithms for multivariate multipoint evaluation, namely the algorithms of Kedlaya and Umans, that of Björklund, Kaski and Williams (IPEC 2017, Algorithmica 2019), and that of Bhargava, Ghosh, Kumar and Mohapatra, together with a result of Bombieri and Vinogradov from analytic number theory about the distribution of primes in an arithmetic progression. We also present a second algorithm for multivariate multipoint evaluation that is completely elementary and, in particular, avoids the use of the Bombieri–Vinogradov Theorem. However, it requires a mild assumption that the field size is bounded by an exponential tower in d of bounded height.
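For concreteness, the problem being sped up is direct evaluation. A minimal sketch of the naive baseline over a prime field, which costs roughly N · d^m operations and is what the algorithms above improve to (d^m + N)^{1+o(1)} (the example polynomial and modulus are hypothetical):

```python
def naive_multipoint_eval(coeffs, points, p):
    """Evaluate an m-variate polynomial over F_p at several points directly.
    coeffs[(e_1, ..., e_m)] is the coefficient of x_1^{e_1} ... x_m^{e_m}.
    This is the trivial baseline, not the fast algorithm of the paper."""
    results = []
    for pt in points:
        total = 0
        for exps, c in coeffs.items():
            term = c
            for x, e in zip(pt, exps):
                term = term * pow(x, e, p) % p  # modular exponentiation per variable
            total = (total + term) % p
        results.append(total)
    return results

# f(x, y) = x^2 y + 3y + 5 over F_7, evaluated at two points
coeffs = {(2, 1): 1, (0, 1): 3, (0, 0): 5}
print(naive_multipoint_eval(coeffs, [(1, 1), (2, 3)], 7))  # -> [2, 5]
```

The fast algorithms amortize work across the N points instead of treating each evaluation independently.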

We present approximation and exact algorithms for piecewise regression of univariate and bivariate data using fixed-degree polynomials. Specifically, given a set S of n data points (x_1, y_1), ..., (x_n, y_n) ∈ R^d × R where d ∈ {1, 2}, the goal is to segment the x_i's into some (arbitrary) number of disjoint pieces P_1, ..., P_k, where each piece P_j is associated with a fixed-degree polynomial f_j : R^d → R, to minimize the total loss function λk + Σ_{i=1}^n (y_i − f(x_i))^2, where λ ≥ 0 is a regularization term that penalizes model complexity (number of pieces) and f : ∪_{j=1}^k P_j → R is the piecewise polynomial function defined as f|_{P_j} = f_j. The pieces P_1, ..., P_k are disjoint intervals of R in the case of univariate data and disjoint axis-aligned rectangles in the case of bivariate data. Our error approximation allows use of any fixed-degree polynomial, not just linear functions. Our main results are the following. For univariate data, we present a (1 + ε)-approximation algorithm with time complexity O((n/ε) log(1/ε)), assuming that the data is presented in sorted order of the x_i's. For bivariate data, we present three results: a subexponential exact algorithm with running time n^{O(√n)}; a polynomial-time constant-approximation algorithm; and a quasi-polynomial time approximation scheme (QPTAS). The bivariate case is believed to be NP-hard in the folklore, but we could not find a published record in the literature, so in this paper we also present a hardness proof for completeness.
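The loss being minimized can be illustrated on the simplest special case. A minimal exact dynamic program, assuming degree-0 (constant) pieces and univariate data sorted by x; this is an O(n^2) illustration of the objective λk + Σ(y_i − f(x_i))^2, not the paper's (1 + ε)-approximation algorithm:

```python
import numpy as np

def piecewise_constant_fit(y, lam):
    """Minimize lam*k + sum_i (y_i - f(x_i))^2 over segmentations into k
    intervals, each fit by its best constant (the segment mean).
    Assumes y is ordered by x. Exact O(n^2) dynamic program."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s1 = np.concatenate(([0.0], np.cumsum(y)))       # prefix sums of y
    s2 = np.concatenate(([0.0], np.cumsum(y * y)))   # prefix sums of y^2

    def sse(i, j):  # squared error of the best constant fit on y[i:j]
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

    # dp[j] = optimal cost for the first j points; each new segment pays lam.
    dp = [0.0] + [float("inf")] * n
    for j in range(1, n + 1):
        dp[j] = min(dp[i] + lam + sse(i, j) for i in range(j))
    return dp[n]

print(piecewise_constant_fit([1, 1, 1, 10, 10, 10], 0.5))  # two perfect pieces: 2*0.5 + 0 = 1.0
```

Fixed-degree polynomial pieces replace the segment mean with a least-squares polynomial fit per candidate interval; the same segment-boundary recursion applies.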