Title: Hermite Interpolation With Error Correction: Fields of Zero or Large Characteristic and Large Error Rate
Multiplicity code decoders are based on Hermite polynomial interpolation with error correction. In order to have a unique Hermite interpolant, one assumes that the field of scalars has characteristic 0 or >= k+1, where k is the maximum order of the derivatives in the list of interpolated values of the polynomial and its derivatives. For scalar fields of characteristic k+1, the minimum number of values for interpolating a polynomial of degree <= D is D+1+2E(k+1) when <= E of the values are erroneous. Here we give an error-correcting Hermite interpolation algorithm that can tolerate more errors, assuming that the characteristic of the scalar field is either 0 or >= D+1. Our algorithm requires (k+1)D + 1 - (k+1)k/2 + 2E values. As an example, consider k = 2. If the error ratio (number of errors)/(number of evaluations) is <= 0.16, our new algorithm requires ceiling( (4+7/17)D - (1+8/17) ) values, while multiplicity decoding requires 25D+25 values. If the error ratio is <= 0.2, our algorithm requires 5D-2 evaluations over characteristic 0 or >= D+1, while multiplicity decoding for an error ratio of 0.2 over fields of characteristic 3 is not possible for D >= 3. Our algorithm is based on Reed-Solomon interpolation without multiplicities, which becomes possible for Hermite interpolation because of the high redundancy necessary for error correction.
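To make the comparison concrete, here is a minimal Python sketch that tabulates the value counts from the two formulas stated in the abstract; the function names are illustrative (not from the paper), and the integer handling of the error bound is an assumption.

```python
# Value counts from the formulas in the abstract (function names are
# illustrative; the handling of non-integer E is an assumption).
import math

def values_multiplicity(D: int, k: int, E: int) -> int:
    # Multiplicity decoding: D + 1 + 2E(k+1) values for degree <= D,
    # derivative orders up to k, and at most E erroneous values.
    return D + 1 + 2 * E * (k + 1)

def values_new(D: int, k: int, E: int) -> int:
    # The new algorithm (characteristic 0 or >= D+1):
    # (k+1)D + 1 - (k+1)k/2 + 2E values.
    return (k + 1) * D + 1 - (k + 1) * k // 2 + 2 * E

# Example from the abstract: k = 2 and error ratio E/N <= 0.16, where
# multiplicity decoding needs N = 25D + 25 values (solve N = D+1+6E
# with E = 0.16N).
D, k = 10, 2
N_mult = 25 * D + 25
E = (16 * N_mult) // 100          # E/N_mult = 0.16 exactly here
assert values_multiplicity(D, k, E) == N_mult
print("multiplicity decoding:", N_mult, "values for", E, "errors")
print("new algorithm, same E:", values_new(D, k, E), "values")
# Closed form at ratio 0.16 quoted in the abstract:
print("new algorithm at ratio 0.16:", math.ceil((4 + 7/17) * D - (1 + 8/17)))
```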
Award ID(s):
1717100
NSF-PAR ID:
10293314
Author(s) / Creator(s):
Date Published:
Journal Name:
ISSAC '21: Proceedings of the 2021 International Symposium on Symbolic and Algebraic Computation
Page Range / eLocation ID:
241 to 247
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. We generalize Hermite interpolation with error correction, which is the methodology for multiplicity algebraic error correction codes, to Hermite interpolation of a rational function over a field K from function and derivative values. We present an interpolation algorithm that can locate and correct <= E errors at distinct arguments y in K where at least one of the function values or derivative values is incorrect. The upper bound E on the number of such y is input. Our algorithm sufficiently oversamples the rational function to guarantee a unique interpolant. We sample (f/g)^(j)(y[i]) for 0 <= j <= L[i], 1 <= i <= n, with the y[i] distinct, where (f/g)^(j) is the j-th derivative of the rational function f/g, with f, g in K[x], GCD(f,g) = 1, g != 0, and where N = (L[1]+1) + ... + (L[n]+1) satisfies N >= C + D + 1 + 2(L[1]+1) + ... + 2(L[E]+1), where C is an upper bound on deg(f) and D an upper bound on deg(g); both bounds are input to our algorithm. The arguments y[i] can be poles, which is indicated, truly or falsely, by a function value of infinity with the corresponding L[i] = 0. Our results remain valid for fields K of characteristic >= 1 + max L[i]. Our algorithm has the same asymptotic arithmetic complexity as classical Hermite interpolation, namely soft-O(N). For polynomials, that is, g = 1, and a uniform derivative profile L[1] = ... = L[n], our algorithm specializes to the univariate multiplicity code decoder based on the 1986 Welch-Berlekamp algorithm.
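As a concrete reading of the oversampling condition above, here is a minimal Python sketch; the name is illustrative, and taking the E largest entries of the derivative profile as the worst-case error placement is our assumption, not a statement from the paper.

```python
# Sample-count check for the oversampling condition quoted above.

def enough_samples(L, C, D, E):
    # N = sum of (L[i]+1) values; require
    # N >= C + D + 1 + 2(L[i1]+1) + ... + 2(L[iE]+1), with the
    # correction term over the E largest entries of L (assumption).
    N = sum(l + 1 for l in L)
    worst = sorted(L, reverse=True)[:E]
    return N >= C + D + 1 + 2 * sum(l + 1 for l in worst)

# n = 10 arguments, function value plus first derivative at each
# (L[i] = 1, so N = 20), deg f <= 2, deg g <= 2, at most E = 2 bad
# arguments: 20 >= 2 + 2 + 1 + 2*(2 + 2) holds.
print(enough_samples([1] * 10, C=2, D=2, E=2))   # True
```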
  2. Amir Hashemi (Ed.)
    We present Hermite polynomial interpolation algorithms that, for a sparse univariate polynomial f with coefficients from a field, compute the polynomial from fewer points than the classical algorithms. If the interpolating polynomial f has t terms, our algorithms require argument/value triples (w^i, f(w^i), f'(w^i)) for i = 0, ..., t + ceiling( (t+1)/2 ) - 1, where w is randomly sampled and the probability of a correct output is determined from a degree bound for f. With f' we denote the derivative of f. Our algorithms generalize to multivariate polynomials, higher derivatives, and sparsity with respect to Chebyshev polynomial bases. We have algorithms that can correct errors in the points by oversampling at a limited number of good values. If an upper bound B >= t for the number of terms is given, our algorithms use a randomly selected w and, with high probability, ceiling( t/2 ) + B triples, but then never return an incorrect output. The algorithms are based on Prony's sparse interpolation algorithm. While Prony's algorithm and its variants use fewer values, namely 2t+1 and t+B values f(w^i), respectively, they need more arguments w^i. The situation mirrors that in algebraic error correcting codes, where the Reed-Solomon code requires fewer values than the multiplicity code, which is based on Hermite interpolation, but the Reed-Solomon code requires more distinct arguments. Our sparse Hermite interpolation algorithms can interpolate polynomials over finite fields and over the complex numbers, and from floating point data. Our Prony-based approach does not encounter the Birkhoff phenomenon of Hermite interpolation, where a gap in the derivative values causes multiple interpolants. We can interpolate from t+1 values of f and 2t-1 values of f'.
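Since the algorithms build on Prony's sparse interpolation, a minimal numpy sketch of classical Prony interpolation over the reals may help fix ideas; this is the textbook base algorithm, not the paper's Hermite variant.

```python
# Classical Prony sparse interpolation from 2t values f(w^i), a sketch.
import numpy as np

def prony(samples, w, t):
    # Recover a t-sparse f(x) = sum_j c_j x^(e_j) from
    # samples[i] = f(w^i), i = 0..2t-1, with real w > 1.
    a = np.asarray(samples, dtype=complex)
    # Solve the Hankel system for the term-locator polynomial Lambda.
    H = np.array([[a[i + j] for j in range(t)] for i in range(t)])
    lam = np.linalg.solve(H, -a[t:2 * t])
    # Lambda(z) = z^t + lam[t-1] z^(t-1) + ... + lam[0]; roots are w^(e_j).
    roots = np.roots(np.concatenate(([1.0], lam[::-1])))
    exps = sorted(round(float(np.log(r.real) / np.log(w))) for r in roots)
    # Coefficients from a transposed Vandermonde system in the w^(e_j).
    b = np.array([float(w) ** e for e in exps])
    V = np.vander(b, t, increasing=True).T   # V[i, j] = b_j^i
    c = np.linalg.solve(V, a[:t])
    return list(zip(exps, c.real))

# f(x) = 3x^2 + 5x^7, t = 2, sampled at powers of w = 2.
f = lambda x: 3 * x**2 + 5 * x**7
w, t = 2.0, 2
print(prony([f(w**i) for i in range(2 * t)], w, t))
# -> [(2, 3.0...), (7, 5.0...)]
```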
  3. As the use of spectral/hp element methods, and high-order finite element methods in general, continues to spread, community efforts to create efficient, optimized algorithms associated with fundamental high-order operations have grown. Core tasks such as solution expansion evaluation at quadrature points, stiffness and mass matrix generation, and matrix assembly have received tremendous attention. With the expansion of the types of problems to which high-order methods are applied, and correspondingly the growth in types of numerical tasks accomplished through high-order methods, the number and types of these core operations broaden. This work focuses on solution expansion evaluation at arbitrary points within an element. This operation is core to many postprocessing applications such as evaluation of streamlines and pathlines, as well as to field projection techniques such as mortaring. We expand barycentric interpolation techniques developed on an interval to 2D (triangles and quadrilaterals) and 3D (tetrahedra, prisms, pyramids, and hexahedra) spectral/hp element methods. We provide efficient algorithms for their implementations, and demonstrate their effectiveness using the spectral/hp element library Nektar++ by running a series of baseline evaluations against the 'standard' Lagrangian method, where an interpolation matrix is generated and matrix multiplication applied to evaluate a point at a given location. We present results from a rigorous series of benchmarking tests for a variety of element shapes, polynomial orders and dimensions. We show that when the point of interest is to be repeatedly evaluated, the barycentric method performs at worst 50% slower than a cached matrix evaluation. However, when the point of interest changes repeatedly so that the interpolation matrix must be regenerated in the 'standard' approach, the barycentric method yields far greater performance, with a minimum speedup factor of 7×. Furthermore, when derivatives of the solution evaluation are also required, the barycentric method in general slightly outperforms the cached interpolation matrix method across all elements and orders, with an up to 30% speedup. Finally, we investigate a real-world example of scalar transport using a non-conformal discontinuous Galerkin simulation, in which we observe around a 6× speedup in computational time for the barycentric method compared to the matrix-based approach. We also explore the complexity of both interpolation methods and show that the barycentric interpolation method requires O(k) storage compared to a best-case space complexity of O(k^2) for the Lagrangian interpolation matrix method.
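For readers unfamiliar with the 1D building block, here is a minimal numpy sketch of barycentric Lagrange interpolation on an interval; it is the textbook formula, not Nektar++ code, and shows why repeated evaluations can reuse the precomputed weights.

```python
# Barycentric Lagrange interpolation on an interval, a sketch.
import numpy as np

def bary_weights(x):
    # w_j = 1 / prod_{k != j} (x_j - x_k)
    x = np.asarray(x, dtype=float)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)
    return 1.0 / diff.prod(axis=1)

def bary_eval(x, f, w, xi):
    # Second barycentric form:
    # sum_j w_j f_j / (xi - x_j)  /  sum_j w_j / (xi - x_j)
    d = xi - x
    if np.any(d == 0):                  # xi coincides with a node
        return f[np.argmin(np.abs(d))]
    q = w / d
    return q.dot(f) / q.sum()

# Interpolate cos on 7 Chebyshev-Lobatto nodes; repeated evaluations
# at new points reuse the precomputed weights w.
x = np.cos(np.pi * np.arange(7) / 6)
f = np.cos(x)
w = bary_weights(x)
print(bary_eval(x, f, w, 0.3), np.cos(0.3))
```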
  4. Establishing the complexity of Bounded Distance Decoding for Reed-Solomon codes is a fundamental open problem in coding theory, explicitly asked by Guruswami and Vardy (IEEE Trans. Inf. Theory, 2005). The problem is motivated by the large current gap between the regime where it is NP-hard and the regime where it is efficiently solvable (i.e., the Johnson radius). We show the first NP-hardness results for asymptotically smaller decoding radii than the maximum likelihood decoding radius of Guruswami and Vardy. Specifically, for Reed-Solomon codes of length $N$ and dimension $K=O(N)$, we show that it is NP-hard to decode more than $N - K - c\frac{\log N}{\log\log N}$ errors (with $c>0$ an absolute constant). Moreover, we show that the problem is NP-hard under quasipolynomial-time reductions for an error amount $> N - K - c\log N$ (with $c>0$ an absolute constant). An alternative natural reformulation of the Bounded Distance Decoding problem for Reed-Solomon codes is as a Polynomial Reconstruction problem. In this view, our results show that it is NP-hard to decide whether there exists a degree-$K$ polynomial passing through $K + c\frac{\log N}{\log\log N}$ points from a given set of points $(a_1, b_1), (a_2, b_2), \ldots, (a_N, b_N)$. Furthermore, it is NP-hard under quasipolynomial-time reductions to decide whether there is a degree-$K$ polynomial passing through $K + c\log N$ points. These results follow from the NP-hardness of a generalization of the classical Subset Sum problem to higher moments, called Moments Subset Sum, which has been a known open problem and may be of independent interest. We further reveal a strong connection with the well-studied Prouhet-Tarry-Escott problem in number theory, which turns out to capture a main barrier in extending our techniques. We believe the Prouhet-Tarry-Escott problem deserves further study in the theoretical computer science community.
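To make the Polynomial Reconstruction formulation concrete, here is a brute-force Python sketch of the decision problem (illustrative only; the abstract's point is that the problem is NP-hard, so no efficient general algorithm is expected).

```python
# Polynomial Reconstruction, decided by brute force over subsets:
# given N points, is there a degree <= K polynomial through >= m of them?
from itertools import combinations
from fractions import Fraction

def lagrange_eval(pts, x):
    # Evaluate the unique degree <= len(pts)-1 interpolant through pts
    # at x, in exact rational arithmetic.
    total = Fraction(0)
    for i, (xi, yi) in enumerate(pts):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(pts):
            if j != i:
                term *= Fraction(x - xj, xi - xj)
        total += term
    return total

def passes_through(points, K, m):
    # Any K+1 points with distinct x determine a unique degree <= K
    # polynomial, so it suffices to fit each (K+1)-subset and count hits.
    for subset in combinations(points, K + 1):
        hits = sum(1 for (x, y) in points if lagrange_eval(subset, x) == y)
        if hits >= m:
            return True
    return False

pts = [(0, 1), (1, 2), (2, 5), (3, 10), (4, 0)]   # y = x^2 + 1 except at x=4
print(passes_through(pts, K=2, m=4))               # True
print(passes_through(pts, K=2, m=5))               # False
```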
  5. Multivariate multipoint evaluation is the problem of evaluating a multivariate polynomial, given as a coefficient vector, simultaneously at multiple evaluation points. In this work, we show that there exists a deterministic algorithm for multivariate multipoint evaluation over any finite field F that outputs the evaluations of an m-variate polynomial of degree less than d in each variable at N points in time (d^m + N)^{1+o(1)} · poly(m, d, log |F|) for all m ∈ N and all sufficiently large d ∈ N. A previous work of Kedlaya and Umans (FOCS 2008, SICOMP 2011) achieved the same time complexity when the number of variables m is at most d^{o(1)}, and had left the problem of removing this condition open. A recent work of Bhargava, Ghosh, Kumar and Mohapatra (STOC 2022) answered this question when the underlying field is not too large and has characteristic less than d^{o(1)}. In this work, we remove this constraint on the number of variables over all finite fields, thereby answering the question of Kedlaya and Umans over all finite fields. Our algorithm relies on a non-trivial combination of ideas from three seemingly different previously known algorithms for multivariate multipoint evaluation, namely the algorithms of Kedlaya and Umans, that of Björklund, Kaski and Williams (IPEC 2017, Algorithmica 2019), and that of Bhargava, Ghosh, Kumar and Mohapatra, together with a result of Bombieri and Vinogradov from analytic number theory about the distribution of primes in arithmetic progressions. We also present a second algorithm for multivariate multipoint evaluation that is completely elementary and, in particular, avoids the use of the Bombieri-Vinogradov theorem. However, it requires a mild assumption that the field size is bounded by an exponential tower in d of bounded height.
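For concreteness, here is a naive Python sketch of the multivariate multipoint evaluation problem itself over a prime field; it costs on the order of d^m · N · m field operations, the baseline that the algorithms above improve to (d^m + N)^{1+o(1)}.

```python
# Naive multivariate multipoint evaluation over F_p, a sketch.

def multipoint_eval(coeffs, points, p):
    # coeffs: dict mapping exponent tuples (e_1, ..., e_m), 0 <= e_i < d,
    # to coefficients in F_p; returns [f(a) mod p for a in points].
    results = []
    for a in points:
        val = 0
        for exps, c in coeffs.items():
            term = c
            for ai, ei in zip(a, exps):
                term = term * pow(ai, ei, p) % p
            val = (val + term) % p
        results.append(val)
    return results

# f(x, y) = 3*x*y^2 + x + 7 over F_101 at N = 3 points.
p = 101
coeffs = {(1, 2): 3, (1, 0): 1, (0, 0): 7}
pts = [(2, 5), (10, 10), (0, 99)]
print(multipoint_eval(coeffs, pts, p))   # -> [58, 88, 7]
```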