 Award ID(s):
 1717100
 NSFPAR ID:
 10293314
 Journal Name:
 ISSAC '21: Proceedings of the 2021 on International Symposium on Symbolic and Algebraic Computation
 Page Range / eLocation ID:
 241 to 247
 Sponsoring Org:
 National Science Foundation

We generalize Hermite interpolation with error correction, the methodology behind multiplicity algebraic error-correcting codes, to Hermite interpolation of a rational function over a field K from values of the function and its derivatives. We present an interpolation algorithm that can locate and correct <= E errors at distinct arguments y in K where at least one of the values, or the value of a derivative, is incorrect. The upper bound E for the number of such y is input. Our algorithm sufficiently oversamples the rational function to guarantee a unique interpolant. We sample (f/g)^(j)(y[i]) for 0 <= j <= L[i], 1 <= i <= n, with the y[i] distinct, where (f/g)^(j) is the j-th derivative of the rational function f/g, f, g in K[x], GCD(f,g) = 1, g ≠ 0, and where N = (L[1]+1) + ... + (L[n]+1) >= C + D + 1 + 2(L[1]+1) + ... + 2(L[E]+1), where C is an upper bound for deg(f) and D an upper bound for deg(g), both of which are input to our algorithm. The arguments y[i] can be poles, which is truly or falsely indicated by a function value of infinity together with the corresponding L[i] = 0. Our results remain valid for fields K of characteristic >= 1 + max L[i]. Our algorithm has the same asymptotic arithmetic complexity as classical Hermite interpolation, namely soft-O(N). For polynomials, that is, for g = 1, and a uniform derivative profile L[1] = ... = L[n], our algorithm specializes to the univariate multiplicity code decoder based on the 1986 Welch-Berlekamp algorithm.
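The oversampling condition above reduces, in the error-free case, to a linear system in the unknown coefficients of f and g, because the Leibniz rule applied to f = (f/g)·g gives f^(j)(y) = sum over k of binom(j,k)·(f/g)^(k)(y)·g^(j-k)(y), which is linear in both coefficient sets. Below is a minimal sketch of that error-free interpolation step in Python/SymPy; it is not the paper's error-correcting decoder (no error locator is computed), and the normalization g(0) = 1 is an added assumption for illustration.

```python
# Sketch: error-free Hermite rational interpolation over Q via SymPy.
# Assumptions (ours, for illustration): no erroneous values (E = 0),
# no poles among the y[i], and the normalization g(0) = 1.
from sympy import symbols, binomial, diff, linsolve, expand, Rational

x = symbols("x")

def hermite_rational_interpolate(data, C, D):
    """data: list of (y_i, [v_0, ..., v_{L_i}]) with v_j = (f/g)^(j)(y_i).
    C, D: input degree bounds for f and g. Returns (f, g) with g(0) = 1."""
    a = symbols(f"a0:{C + 1}")  # unknown coefficients of f
    b = symbols(f"b0:{D + 1}")  # unknown coefficients of g
    f = sum(a[k] * x**k for k in range(C + 1))
    g = sum(b[k] * x**k for k in range(D + 1))
    eqs = []
    for y, vals in data:
        for j in range(len(vals)):
            # Leibniz rule on f = (f/g)*g:
            #   f^(j)(y) = sum_k binom(j,k) * (f/g)^(k)(y) * g^(j-k)(y)
            lhs = diff(f, x, j).subs(x, y)
            rhs = sum(binomial(j, k) * vals[k] * diff(g, x, j - k).subs(x, y)
                      for k in range(j + 1))
            eqs.append(lhs - rhs)          # linear in the a's and b's
    eqs.append(g.subs(x, 0) - 1)           # normalization g(0) = 1 (assumed)
    sol = next(iter(linsolve(eqs, [*a, *b])))
    s = dict(zip([*a, *b], sol))
    return expand(f.subs(s)), expand(g.subs(s))
```

For example, sampling f/g = x/(x+1) and its first derivative at y = 0, 1, 2 gives N = 6 values with C = D = 1, enough for a unique interpolant when E = 0, and recovers f = x, g = x + 1.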

Amir Hashemi (Ed.). We present Hermite polynomial interpolation algorithms that, for a sparse univariate polynomial f with coefficients from a field, compute the polynomial from fewer points than the classical algorithms. If the interpolating polynomial f has t terms, our algorithms require the argument/value triples (w^i, f(w^i), f'(w^i)) for i = 0, ..., t + ceiling((t+1)/2) - 1, where w is randomly sampled and the probability of a correct output is determined from a degree bound for f. By f' we denote the derivative of f. Our algorithms generalize to multivariate polynomials, higher derivatives, and sparsity with respect to Chebyshev polynomial bases. We have algorithms that can correct errors in the points by oversampling at a limited number of good values. If an upper bound B >= t for the number of terms is given, our algorithms use a randomly selected w and, with high probability, ceiling(t/2) + B triples, but then never return an incorrect output. The algorithms are based on Prony's sparse interpolation algorithm. While Prony's algorithm and its variants use fewer values, namely 2t+1 and t+B values f(w^i), respectively, they need more arguments w^i. The situation mirrors that in algebraic error-correcting codes, where the Reed-Solomon code requires fewer values than the multiplicity code, which is based on Hermite interpolation, but the Reed-Solomon code requires more distinct arguments. Our sparse Hermite interpolation algorithms can interpolate polynomials over finite fields and over the complex numbers, and from floating point data. Our Prony-based approach does not encounter the Birkhoff phenomenon of Hermite interpolation, in which a gap in the derivative values causes multiple interpolants. We can interpolate from t+1 values of f and 2t-1 values of f'.
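For context, the classical Prony algorithm the paper builds on can be sketched in a few lines: from the 2t values f(w^i) one solves a t-by-t Hankel system for a linear recurrence, the roots of whose characteristic polynomial are the term roots w^{e_l}, and a transposed Vandermonde solve then yields the coefficients. The sketch below fixes w = 2 over the integers so that exponents can be read off as bit lengths; this is an illustrative assumption, not the paper's field-generic setting, and no Hermite data or error correction is involved.

```python
# Sketch of Prony's sparse interpolation (the classical, error-free case).
# Illustrative assumptions: integer coefficients, evaluation base w = 2
# (so a term root 2^e reveals its exponent e as a bit length), and exact
# arithmetic via SymPy.
from sympy import Matrix, symbols, roots

def prony_interpolate(vals, t):
    """vals = [f(2^0), f(2^1), ..., f(2^(2t-1))] for a t-sparse integer
    polynomial f.  Returns sorted (exponent, coefficient) pairs."""
    # 1) Hankel solve: the values satisfy a linear recurrence of order t
    #    whose characteristic polynomial Lambda vanishes at each 2^{e_l}.
    H = Matrix(t, t, lambda i, j: vals[i + j])
    rhs = Matrix(t, 1, lambda i, _: -vals[i + t])
    lam = H.solve(rhs)
    z = symbols("z")
    Lambda = z**t + sum(lam[j] * z**j for j in range(t))
    # 2) The roots of Lambda are the term roots 2^{e_l}.
    bs = sorted(roots(Lambda, z).keys())
    exps = [int(b).bit_length() - 1 for b in bs]
    # 3) A transposed Vandermonde solve recovers the coefficients.
    V = Matrix(t, t, lambda i, j: bs[j]**i)
    cs = V.solve(Matrix(t, 1, lambda i, _: vals[i]))
    return [(exps[l], cs[l]) for l in range(t)]
```

For f = 3x^5 + 2x^2 (t = 2), the four values f(1), f(2), f(4), f(8) = 5, 104, 3104, 98432 suffice, illustrating the 2t-value count the abstract contrasts with its triple-based algorithms.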

As the use of spectral/hp element methods, and high-order finite element methods in general, continues to spread, community efforts to create efficient, optimized algorithms for fundamental high-order operations have grown. Core tasks such as solution expansion evaluation at quadrature points, stiffness and mass matrix generation, and matrix assembly have received tremendous attention. With the expansion of the types of problems to which high-order methods are applied, and correspondingly the growth in the types of numerical tasks accomplished through high-order methods, the number and types of these core operations broaden. This work focuses on solution expansion evaluation at arbitrary points within an element. This operation is core to many post-processing applications, such as evaluation of streamlines and pathlines, as well as to field projection techniques such as mortaring. We expand barycentric interpolation techniques developed on an interval to 2D (triangles and quadrilaterals) and 3D (tetrahedra, prisms, pyramids, and hexahedra) spectral/hp element methods. We provide efficient algorithms for their implementation, and demonstrate their effectiveness using the spectral/hp element library Nektar++ by running a series of baseline evaluations against the ‘standard’ Lagrangian method, in which an interpolation matrix is generated and matrix multiplication is applied to evaluate a point at a given location. We present results from a rigorous series of benchmarking tests for a variety of element shapes, polynomial orders, and dimensions. We show that when the point of interest is to be repeatedly evaluated, the barycentric method performs at worst slower than a cached matrix evaluation. However, when the point of interest changes repeatedly, so that the interpolation matrix must be regenerated in the ‘standard’ approach, the barycentric method yields far greater performance, with a minimum speedup factor of $50\%$.
Furthermore, when derivatives of the solution evaluation are also required, the barycentric method in general slightly outperforms the cached interpolation matrix method across all elements and orders, with up to a $7\times$ speedup. Finally, we investigate a real-world example of scalar transport using a non-conformal discontinuous Galerkin simulation, in which we observe around $30\%$ speedup in computational time for the barycentric method compared to the matrix-based approach. We also explore the complexity of both interpolation methods and show that the barycentric interpolation method requires $6\times$ the storage of a best-case space complexity of $\mathcal{O}(k)$ for the Lagrangian interpolation matrix method, which in the worst case is $\mathcal{O}(k^2)$.
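A one-dimensional sketch conveys the two approaches being benchmarked: second-form barycentric evaluation, which needs only precomputed weights (O(k) storage and work per point), versus building a row of the Lagrange interpolation matrix (O(k^2) work) for each new point. The NumPy code below is a minimal illustration, not Nektar++'s multi-dimensional implementation.

```python
# Sketch: 1D barycentric Lagrange evaluation vs. the interpolation-matrix
# ('standard' Lagrangian) approach; illustrative only.
import numpy as np

def bary_weights(xs):
    """Barycentric weights w_j = 1 / prod_{m != j} (x_j - x_m); O(k^2) once."""
    d = xs[:, None] - xs[None, :]
    np.fill_diagonal(d, 1.0)
    return 1.0 / d.prod(axis=1)

def bary_eval(xs, fs, w, x):
    """Second-form barycentric evaluation: O(k) work and storage per point."""
    diff = x - xs
    hit = diff == 0.0
    if hit.any():                    # x coincides with an interpolation node
        return fs[np.argmax(hit)]
    t = w / diff
    return (t @ fs) / t.sum()

def matrix_row(xs, x):
    """Row l_j(x) of the Lagrange interpolation matrix; O(k^2) per new point."""
    k = len(xs)
    return np.array([np.prod([(x - xs[m]) / (xs[j] - xs[m])
                              for m in range(k) if m != j]) for j in range(k)])
```

Both routes reproduce a degree-3 polynomial exactly on four nodes; the regime the abstract highlights is when x changes on every call, so that matrix_row must be rebuilt at O(k^2) cost while bary_eval stays O(k).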
Multivariate multipoint evaluation is the problem of evaluating a multivariate polynomial, given as a coefficient vector, simultaneously at multiple evaluation points. In this work, we show that there exists a deterministic algorithm for multivariate multipoint evaluation over any finite field F that outputs the evaluations of an m-variate polynomial of degree less than d in each variable at N points in time (d^m + N)^{1+o(1)} · poly(m, d, log |F|) for all m ∈ N and all sufficiently large d ∈ N. A previous work of Kedlaya and Umans (FOCS 2008, SICOMP 2011) achieved the same time complexity when the number of variables m is at most d^{o(1)}, and left removing this condition as an open problem. A recent work of Bhargava, Ghosh, Kumar and Mohapatra (STOC 2022) answered this question when the underlying field is not too large and has characteristic less than d^{o(1)}. In this work, we remove this constraint on the number of variables over all finite fields, thereby answering the question of Kedlaya and Umans over all finite fields. Our algorithm relies on a non-trivial combination of ideas from three seemingly different previously known algorithms for multivariate multipoint evaluation, namely the algorithms of Kedlaya and Umans, that of Björklund, Kaski and Williams (IPEC 2017, Algorithmica 2019), and that of Bhargava, Ghosh, Kumar and Mohapatra, together with a result of Bombieri and Vinogradov from analytic number theory about the distribution of primes in an arithmetic progression. We also present a second algorithm for multivariate multipoint evaluation that is completely elementary and, in particular, avoids the use of the Bombieri–Vinogradov theorem. However, it requires a mild assumption that the field size is bounded by an exponential tower in d of bounded height.
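As a baseline for the problem statement, direct substitution evaluates each of the N points against all (at most d^m) terms, for roughly d^m · N field operations; the algorithms above improve this to (d^m + N)^{1+o(1)}. A small brute-force reference over F_p, using an exponent-tuple dictionary as the coefficient representation (our encoding choice, for illustration):

```python
# Brute-force reference for multivariate multipoint evaluation over F_p.
# Representation (illustrative): a dict mapping exponent tuples
# (e_1, ..., e_m) to coefficients.  Cost ~ (#terms) * N, i.e. up to
# d^m * N field operations -- the baseline the fast algorithms beat.
def mme_naive(coeffs, points, p):
    """Evaluate the m-variate polynomial `coeffs` at each point, mod p."""
    out = []
    for pt in points:
        acc = 0
        for exps, c in coeffs.items():
            term = c % p
            for xi, e in zip(pt, exps):
                term = term * pow(xi, e, p) % p
            acc = (acc + term) % p
        out.append(acc)
    return out
```

For example, for f(x, y) = x^2·y + 3 over F_7, evaluating at (2, 3) and (1, 1) gives 15 mod 7 = 1 and 4.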

Establishing the complexity of {\em Bounded Distance Decoding} for Reed-Solomon codes is a fundamental open problem in coding theory, explicitly asked by Guruswami and Vardy (IEEE Trans. Inf. Theory, 2005). The problem is motivated by the large current gap between the regime where it is NP-hard and the regime where it is efficiently solvable (i.e., the Johnson radius). We show the first NP-hardness results for asymptotically smaller decoding radii than the maximum likelihood decoding radius of Guruswami and Vardy. Specifically, for Reed-Solomon codes of length $N$ and dimension $K = O(N)$, we show that it is NP-hard to decode more than $N - K - c\frac{\log N}{\log\log N}$ errors (with $c > 0$ an absolute constant). Moreover, we show that the problem is NP-hard under quasi-polynomial-time reductions for an error amount $> N - K - c\log N$ (with $c > 0$ an absolute constant). An alternative natural reformulation of the Bounded Distance Decoding problem for Reed-Solomon codes is as a {\em Polynomial Reconstruction} problem. In this view, our results show that it is NP-hard to decide whether there exists a degree-$K$ polynomial passing through $K + c\frac{\log N}{\log\log N}$ points from a given set of points $(a_1, b_1), (a_2, b_2), \ldots, (a_N, b_N)$. Furthermore, it is NP-hard under quasi-polynomial-time reductions to decide whether there is a degree-$K$ polynomial passing through $K + c\log N$ many points. These results follow from the NP-hardness of a generalization of the classical Subset Sum problem to higher moments, called {\em Moments Subset Sum}, which has been a known open problem, and which may be of independent interest. We further reveal a strong connection with the well-studied Prouhet-Tarry-Escott problem in number theory, which turns out to capture a main barrier in extending our techniques. We believe the Prouhet-Tarry-Escott problem deserves further study in the theoretical computer science community.
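In the Polynomial Reconstruction view, the decision problem is: given N points, does some polynomial of degree at most K pass through a prescribed number of them? The brute-force check below (Lagrange-interpolate every (K+1)-subset and count agreements) runs in exponential time and is only an illustration of the problem whose hardness the abstract establishes, not a decoder.

```python
# Brute-force Polynomial Reconstruction check (illustration only; the
# abstract shows the general problem is NP-hard, and this subset
# enumeration takes exponential time).
from itertools import combinations
from fractions import Fraction

def fits(points, K, target):
    """points: [(a_i, b_i)] with distinct a_i.  True iff some polynomial
    of degree <= K passes through at least `target` of the points."""
    for base in combinations(points, K + 1):
        def p(x):  # Lagrange interpolant through the chosen K+1 points
            total = Fraction(0)
            for j, (aj, bj) in enumerate(base):
                term = Fraction(bj)
                for m, (am, _) in enumerate(base):
                    if m != j:
                        term *= Fraction(x - am, aj - am)
                total += term
            return total
        if sum(p(a) == b for a, b in points) >= target:
            return True
    return False
```

For instance, with four points on x^2 plus one outlier, a quadratic through 4 of the 5 points exists, but none passes through all 5.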