On Uniform Convergence of Diagonal Multipoint Padé Approximants for Entire Functions
We prove that for most entire functions f in the sense of category, a strong form of the Baker-Gammel-Wills Conjecture holds. More precisely, there is an infinite sequence S of positive integers n such that, given any r > 0 and multipoint Padé approximants R_n to f with interpolation points in {z : |z| ≤ r}, {R_n}_{n∈S} converges locally uniformly to f in the plane. The sequence S does not depend on r, nor on the interpolation points. For entire functions with smooth, rapidly decreasing coefficients, full diagonal sequences of multipoint Padé approximants converge.
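For context, the defining property of the approximants in question is standard (this definition comes from the general Padé literature and is not quoted from the paper): given interpolation points $z_{0,n},\dots,z_{2n,n}$ in $\{z : |z| \le r\}$, not necessarily distinct, the multipoint Padé approximant $R_n = p_n/q_n$ with $\deg p_n \le n$, $\deg q_n \le n$, and $q_n \not\equiv 0$ is determined by requiring that

$$ \frac{f(z)\,q_n(z) - p_n(z)}{\prod_{j=0}^{2n} (z - z_{j,n})} $$

be analytic at each interpolation point, so that $R_n$ interpolates $f$ at the $2n+1$ points counting multiplicity. When every $z_{j,n} = 0$, this reduces to the classical diagonal Padé approximant $[n/n]$ to $f$ at the origin, the setting of the original Baker-Gammel-Wills Conjecture.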
- Award ID(s): 1800251
- PAR ID: 10091966
- Date Published:
- Journal Name: Constructive Approximation
- Volume: 49
- ISSN: 1432-0940
- Page Range / eLocation ID: 149-174
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We strengthen the classical approximation theorems of Weierstrass, Runge, and Mergelyan by showing the polynomial and rational approximants can be taken to have a simple geometric structure. In particular, when approximating a function $f$ on a compact set $K$, the critical points of our approximants may be taken to lie in any given domain containing $K$, and all the critical values in any given neighborhood of the polynomially convex hull of $f(K)$.
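For orientation, the standard definitions behind this statement (not part of the abstract itself): for a polynomial or rational function $p$, the critical points and critical values are

$$ \mathrm{Crit}(p) = \{\zeta : p'(\zeta) = 0\}, \qquad \mathrm{CV}(p) = \{p(\zeta) : \zeta \in \mathrm{Crit}(p)\}, $$

and the polynomially convex hull of a compact set $E \subset \mathbb{C}$ is $E$ together with the bounded components of its complement. The result above says these sets can be confined to prescribed regions without giving up Weierstrass/Runge/Mergelyan-type approximation.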
- Multivariate multipoint evaluation is the problem of evaluating a multivariate polynomial, given as a coefficient vector, simultaneously at multiple evaluation points. In this work, we show that there exists a deterministic algorithm for multivariate multipoint evaluation over any finite field F that outputs the evaluations of an m-variate polynomial of degree less than d in each variable at N points in time (d^m + N)^{1+o(1)} · poly(m, d, log |F|) for all m ∈ N and all sufficiently large d ∈ N. A previous work of Kedlaya and Umans (FOCS 2008, SICOMP 2011) achieved the same time complexity when the number of variables m is at most d^{o(1)} and had left the problem of removing this condition as an open problem. A recent work of Bhargava, Ghosh, Kumar and Mohapatra (STOC 2022) answered this question when the underlying field is not too large and has characteristic less than d^{o(1)}. In this work, we remove this constraint on the number of variables over all finite fields, thereby answering the question of Kedlaya and Umans over all finite fields. Our algorithm relies on a non-trivial combination of ideas from three seemingly different previously known algorithms for multivariate multipoint evaluation, namely the algorithms of Kedlaya and Umans, that of Björklund, Kaski and Williams (IPEC 2017, Algorithmica 2019), and that of Bhargava, Ghosh, Kumar and Mohapatra, together with a result of Bombieri and Vinogradov from analytic number theory about the distribution of primes in an arithmetic progression. We also present a second algorithm for multivariate multipoint evaluation that is completely elementary and, in particular, avoids the use of the Bombieri–Vinogradov Theorem. However, it requires a mild assumption that the field size is bounded by an exponential-tower in d of bounded height.
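To make the problem statement concrete, here is a brute-force baseline in Python (an illustration of the task, not any of the algorithms above); the function name, the dense coefficient-array encoding, and the toy polynomial are assumptions made for this example. Its cost is roughly N · d^m field operations, the baseline that the quoted (d^m + N)^{1+o(1)} bound improves on.

```python
import itertools
import numpy as np

def naive_multipoint_eval(coeffs, points, p):
    """Brute-force multivariate multipoint evaluation over F_p.

    coeffs : m-dimensional integer array; coeffs[e1, ..., em] is the coefficient
             of x1^e1 * ... * xm^em, with degree < d in each variable.
    points : iterable of m-tuples of integers, the evaluation points.
    p      : prime modulus defining the field F_p.
    """
    coeffs = np.asarray(coeffs) % p
    results = []
    for point in points:
        total = 0
        # Sum coefficient * monomial over every exponent tuple.
        for exps in itertools.product(*(range(s) for s in coeffs.shape)):
            term = int(coeffs[exps])
            for x, e in zip(point, exps):
                term = (term * pow(int(x), e, p)) % p
            total = (total + term) % p
        results.append(total)
    return results

# Toy example: f(x, y) = 1 + 2*x*y + 3*x*y^2 over F_7, evaluated at two points.
coeffs = np.zeros((2, 3), dtype=int)   # degree < 2 in x, degree < 3 in y
coeffs[0, 0], coeffs[1, 1], coeffs[1, 2] = 1, 2, 3
print(naive_multipoint_eval(coeffs, [(1, 1), (2, 3)], 7))   # [6, 4]
```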
- Many astrophysical applications require efficient yet reliable forecasts of stellar evolution tracks. One example is population synthesis, which generates forward predictions of models for comparison with observations. The majority of state-of-the-art rapid population synthesis methods are based on analytic fitting formulae to stellar evolution tracks that are computationally cheap to sample statistically over a continuous parameter range. The computational costs of running detailed stellar evolution codes, such as MESA, over wide and densely sampled parameter grids are prohibitive, while stellar-age-based interpolation in between sparsely sampled grid points leads to intolerably large systematic prediction errors. In this work, we provide two solutions for automated interpolation methods that offer satisfactory trade-off points between cost-efficiency and accuracy. We construct a timescale-adapted evolutionary coordinate and use it in a two-step interpolation scheme that traces the evolution of stars from the zero-age main sequence all the way to the end of core helium burning, while covering a mass range from 0.65 to 300 M⊙. The feedforward neural network regression model (first solution) that we train to predict stellar surface variables can make millions of predictions, sufficiently accurate over the entire parameter space, within tens of seconds on a 4-core CPU. The hierarchical nearest-neighbor interpolation algorithm (second solution) that we hard-code to the same end achieves even higher predictive accuracy; the same algorithm remains applicable to all stellar variables evolved over time, but it is two orders of magnitude slower. Our methodological framework is demonstrated to work on the MESA Isochrones and Stellar Tracks (Choi et al. 2016) data set, but is independent of the input stellar catalog. Finally, we discuss the prospective applications of these methods and provide guidelines for generalizing them to higher-dimensional parameter spaces.
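As a rough sketch of the two-step idea (first re-parameterize each track onto a shared evolutionary coordinate, then interpolate across mass at fixed coordinate values), here is a minimal Python illustration; the normalized-age coordinate, the variable names, and the toy tracks are placeholders assumed for this example and stand in for the paper's timescale-adapted coordinate and its full model grids.

```python
import numpy as np

def evolution_coordinate(ages):
    """Placeholder evolutionary coordinate: rescale a track's ages to [0, 1].

    The paper builds a timescale-adapted coordinate instead; this simple
    normalization only illustrates where such a coordinate enters the scheme.
    """
    t = np.asarray(ages, dtype=float)
    return (t - t[0]) / (t[-1] - t[0])

def interpolate_track(mass, grid_masses, grid_tracks, n_samples=200):
    """Two-step interpolation between the two grid tracks bracketing `mass`.

    grid_tracks[i] is a dict with arrays 'age' and 'logL' for grid_masses[i];
    `mass` is assumed to lie strictly inside the grid range.
    """
    s = np.linspace(0.0, 1.0, n_samples)            # common evolutionary coordinate
    hi = int(np.searchsorted(grid_masses, mass))    # index of upper bracketing mass
    lo = hi - 1
    w = (mass - grid_masses[lo]) / (grid_masses[hi] - grid_masses[lo])

    # Step 1: resample both neighboring tracks onto the shared coordinate grid.
    resampled = []
    for i in (lo, hi):
        coord = evolution_coordinate(grid_tracks[i]['age'])
        resampled.append(np.interp(s, coord, grid_tracks[i]['logL']))

    # Step 2: interpolate linearly in mass at each fixed coordinate value.
    return (1 - w) * resampled[0] + w * resampled[1]

# Toy usage with two fabricated tracks.
tracks = [
    {'age': np.array([0.0, 5.0, 10.0]), 'logL': np.array([0.0, 0.2, 0.5])},
    {'age': np.array([0.0, 3.0, 6.0]),  'logL': np.array([0.3, 0.6, 1.0])},
]
print(interpolate_track(1.5, np.array([1.0, 2.0]), tracks)[:3])
```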
- Radial basis functions (RBFs) are prominent examples of reproducing kernels with associated reproducing kernel Hilbert spaces (RKHSs). The convergence theory for kernel-based interpolation in that space is well understood, and optimal rates for the whole RKHS are often known. Schaback added the doubling trick [Math. Comp. 68 (1999), pp. 201–216], which shows that functions having double the smoothness required by the RKHS (along with specific, albeit complicated, boundary behavior) can be approximated with higher convergence rates than the optimal rates for the whole space. Other advances allowed interpolation of target functions which are less smooth, and different norms which measure interpolation error. The current state of the art of error analysis for RBF interpolation treats target functions having smoothness up to twice that of the native space, but error measured in norms which are weaker than that required for membership in the RKHS. Motivated by the fact that the kernels and the approximants they generate are smoother than required by the native space, this article extends the doubling trick to error which measures higher smoothness. This extension holds for a family of kernels satisfying easily checked hypotheses which we describe in this article, and includes many prominent RBFs. In the course of the proof, new convergence rates are obtained for the abstract operator considered by DeVore and Ron in [Trans. Amer. Math. Soc. 362 (2010), pp. 6205–6229], and new Bernstein estimates are obtained relating high-order smoothness norms to the native space norm.
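To make the basic objects concrete, here is a minimal kernel-interpolation sketch in Python with a Gaussian RBF (an illustration of the interpolant and the collocation system, not the article's error analysis); the Gaussian kernel, the shape parameter, and the test function are assumptions chosen for the example.

```python
import numpy as np

def gaussian_kernel(x, y, eps=8.0):
    """Gaussian RBF k(x, y) = exp(-(eps * |x - y|)^2) on the real line."""
    return np.exp(-(eps * np.subtract.outer(x, y)) ** 2)

def rbf_interpolant(centers, values, eps=8.0):
    """Return the kernel interpolant s(x) = sum_j c_j k(x, x_j).

    The coefficients c solve the symmetric collocation system K c = values,
    where K[i, j] = k(x_i, x_j); s then reproduces the data at the centers.
    """
    K = gaussian_kernel(centers, centers, eps)
    c = np.linalg.solve(K, values)
    return lambda x: gaussian_kernel(np.atleast_1d(x), centers, eps) @ c

# Interpolate a smooth target on [0, 1] and report the worst-case error
# on a fine grid (i.e., away from the data sites).
centers = np.linspace(0.0, 1.0, 15)
f = lambda t: np.sin(2 * np.pi * t)
s = rbf_interpolant(centers, f(centers))
grid = np.linspace(0.0, 1.0, 400)
print(np.max(np.abs(s(grid) - f(grid))))
```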