Title: Decoding Multivariate Multiplicity Codes on Product Sets
The multiplicity Schwartz-Zippel lemma bounds the total multiplicity of zeroes of a multivariate polynomial on a product set. This lemma motivates the multiplicity codes of Kopparty, Saraf and Yekhanin [J. ACM, 2014], who showed how to use it to construct high-rate locally decodable codes. However, the algorithmic results about these codes crucially rely on the polynomials being evaluated on a vector space rather than an arbitrary product set. In this work, we show how to decode multivariate multiplicity codes of large multiplicities in polynomial time over finite product sets (over fields of characteristic zero or large characteristic). Previously, such decoding algorithms were not known even for a positive fraction of errors. In contrast, our work goes all the way to the distance of the code, and in particular exceeds both the unique-decoding bound and the Johnson radius. For errors exceeding the Johnson radius, even the combinatorial list-decodability of these codes was not known. Our algorithm is an application of the classical polynomial method directly in the multivariate setting. In particular, we do not rely on a reduction from the multivariate to the univariate case, as is typical of many existing results on decoding codes based on multivariate polynomials. However, a vanilla application of the polynomial method in the multivariate setting does not yield a polynomial upper bound on the list size. We obtain a polynomial bound on the list size by taking an alternative view of multivariate multiplicity codes: we glue all the partial derivatives of the same order together using a fresh set z of variables, and then apply the polynomial method by viewing this as a problem over the field F(z) of rational functions in z.
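For concreteness, the following is a minimal sketch (not the paper's algorithm) of how a bivariate multiplicity code encodes a message: each codeword coordinate stores the polynomial and all of its partial derivatives up to a chosen order, evaluated at one point of a product set. The polynomial f, the sets A and B, and the multiplicity parameter s are illustrative assumptions.

```python
# Sketch: encoding a bivariate multiplicity code on a product set A x B.
# Each coordinate holds all partial derivatives of f of order < s,
# evaluated at one point (a, b). Parameters below are illustrative.
import sympy as sp

x, y = sp.symbols("x y")
s = 3                              # multiplicity parameter (order bound)
f = x**2 * y + 3 * x * y**2 + 7    # assumed example message polynomial
A = [0, 1, 2, 3]                   # assumed example product set A x B
B = [0, 1, 2]

def encode(f, A, B, s):
    codeword = {}
    for a in A:
        for b in B:
            # All partials d^(i+j) f / dx^i dy^j with i + j < s.
            codeword[(a, b)] = tuple(
                sp.diff(f, x, i, y, j).subs({x: a, y: b})
                for i in range(s) for j in range(s - i)
            )
    return codeword

cw = encode(f, A, B, s)
print(cw[(1, 2)])  # derivatives of f of order < 3 at the point (1, 2)
```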
Award ID(s):
2152413
PAR ID:
10574539
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
IEEE Press
Date Published:
Journal Name:
IEEE Transactions on Information Theory
Volume:
70
Issue:
1
ISSN:
0018-9448
Page Range / eLocation ID:
154 to 169
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. In this work, we present an abstract framework for some algebraic error-correcting codes with the aim of capturing codes that are list-decodable to capacity, along with their decoding algorithms. In the polynomial ideal framework, a code is specified by some ideals in a polynomial ring, messages are polynomials, and the encoding of a message polynomial is the collection of residues of that polynomial modulo the ideals. We present an alternate way of viewing this class of codes in terms of linear operators, and show that this alternate view makes their algorithmic list-decodability amenable to analysis. Our framework leads to a new class of codes that we call affine Folded Reed-Solomon codes (which are themselves a special case of the broader class we explore). These codes are common generalizations of the well-studied Folded Reed-Solomon codes and Univariate Multiplicity codes, as well as the less-studied Additive Folded Reed-Solomon codes, and lead to a large family of codes that were not previously known or studied. More significantly, our framework also captures the algorithmic list-decodability of the constituent codes. Specifically, we present a unified view of the decoding algorithm for ideal-theoretic codes and show that the decodability reduces to the analysis of the distance of some related codes. We show that a good bound on this distance leads to capacity-achieving performance of the underlying code, providing a unifying explanation of known capacity-achieving results. In the specific case of affine Folded Reed-Solomon codes, our framework shows that they are efficiently list-decodable up to capacity (for an appropriate setting of the parameters), thereby unifying the previous results for Folded Reed-Solomon, Multiplicity and Additive Folded Reed-Solomon codes.
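As a point of reference for the codes this framework generalizes, here is a minimal sketch of folded Reed-Solomon encoding over a small prime field; in the ideal-theoretic view described above, each bundle of s consecutive evaluations is the residue of the message polynomial modulo the product of the corresponding linear factors. All parameters below (p, g, s, the message) are illustrative assumptions, not values from the paper.

```python
# Sketch: folded Reed-Solomon encoding over F_p (illustrative parameters).
# The i-th symbol bundles s consecutive evaluations of the message
# polynomial f at powers of a primitive element g; ideal-theoretically,
# the bundle is f modulo (x - g^(s*i)) ... (x - g^(s*i + s - 1)).
p = 17          # field size (g = 3 is a primitive root mod 17)
g = 3
s = 2           # folding parameter

def poly_eval(coeffs, a, p):
    """Evaluate sum(coeffs[j] * a**j) mod p by Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * a + c) % p
    return acc

def folded_rs_encode(msg, p, g, s):
    n = (p - 1) // s                          # number of folded symbols
    points = [pow(g, j, p) for j in range(p - 1)]
    return [tuple(poly_eval(msg, points[s * i + t], p) for t in range(s))
            for i in range(n)]

codeword = folded_rs_encode([1, 2, 0, 5], p, g, s)  # f = 1 + 2x + 5x^3
print(codeword)   # 8 symbols, each a pair of field elements
```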
2. Multiplicity code decoders are based on Hermite polynomial interpolation with error correction. In order to have a unique Hermite interpolant, one assumes that the field of scalars has characteristic 0 or >= k+1, where k is the maximum order of the derivatives in the list of values of the polynomial and its derivatives which are interpolated. For scalar fields of characteristic k+1, the minimum number of values for interpolating a polynomial of degree <= D is D+1+2E(k+1) when <= E of the values are erroneous. Here we give an error-correcting Hermite interpolation algorithm that can tolerate more errors, assuming that the characteristic of the scalar field is either 0 or >= D+1. Our algorithm requires (k+1)D + 1 - (k+1)k/2 + 2E values. As an example, consider k = 2. If the error ratio (number of errors)/(number of evaluations) is <= 0.16, our new algorithm requires ceiling((4 + 7/17)D - (1 + 8/17)) values, while multiplicity decoding requires 25D+25 values. If the error ratio is <= 0.2, our algorithm requires 5D-2 evaluations over characteristic 0 or >= D+1, while multiplicity decoding for an error ratio of 0.2 over fields of characteristic 3 is not possible for D >= 3. Our algorithm is based on Reed-Solomon interpolation without multiplicities, which becomes possible for Hermite interpolation because of the high redundancy necessary for error correction.
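To make the counts concrete, here is a small sketch that evaluates both formulas stated in the abstract: the classical multiplicity-decoding count D+1+2E(k+1) and the new count (k+1)D + 1 - (k+1)k/2 + 2E. The sample values of D, k, and E are illustrative.

```python
# Sketch: compare the number of values needed by classical multiplicity
# decoding, D + 1 + 2*E*(k+1), with the abstract's new bound,
# (k+1)*D + 1 - (k+1)*k//2 + 2*E. Sample parameters are illustrative.

def multiplicity_decoding_values(D, k, E):
    return D + 1 + 2 * E * (k + 1)

def new_algorithm_values(D, k, E):
    return (k + 1) * D + 1 - (k + 1) * k // 2 + 2 * E

for D in (10, 50, 100):
    k, E = 2, D        # k = 2, as in the abstract's example
    print(D,
          multiplicity_decoding_values(D, k, E),
          new_algorithm_values(D, k, E))

# For k = 2 the new count simplifies to 3*D - 2 + 2*E, so tolerating
# E = D errors needs 5*D - 2 values, matching the abstract's example.
```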
3. Establishing the complexity of Bounded Distance Decoding for Reed-Solomon codes is a fundamental open problem in coding theory, explicitly asked by Guruswami and Vardy (IEEE Trans. Inf. Theory, 2005). The problem is motivated by the large current gap between the regime where it is NP-hard and the regime where it is efficiently solvable (i.e., the Johnson radius). We show the first NP-hardness results for asymptotically smaller decoding radii than the maximum likelihood decoding radius of Guruswami and Vardy. Specifically, for Reed-Solomon codes of length $N$ and dimension $K = O(N)$, we show that it is NP-hard to decode more than $N - K - c\frac{\log N}{\log\log N}$ errors (with $c > 0$ an absolute constant). Moreover, we show that the problem is NP-hard under quasipolynomial-time reductions for an error amount $> N - K - c\log N$ (with $c > 0$ an absolute constant). An alternative natural reformulation of the Bounded Distance Decoding problem for Reed-Solomon codes is as a Polynomial Reconstruction problem. In this view, our results show that it is NP-hard to decide whether there exists a degree-$K$ polynomial passing through $K + c\frac{\log N}{\log\log N}$ points from a given set of points $(a_1, b_1), (a_2, b_2), \ldots, (a_N, b_N)$. Furthermore, it is NP-hard under quasipolynomial-time reductions to decide whether there is a degree-$K$ polynomial passing through $K + c\log N$ many points. These results follow from the NP-hardness of a generalization of the classical Subset Sum problem to higher moments, called Moments Subset Sum, which has been a known open problem and which may be of independent interest. We further reveal a strong connection with the well-studied Prouhet-Tarry-Escott problem in number theory, which turns out to capture a main barrier in extending our techniques. We believe the Prouhet-Tarry-Escott problem deserves further study in the theoretical computer science community.
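As a concrete illustration of the Prouhet-Tarry-Escott problem mentioned above, the sketch below verifies a classical size-3 solution: two disjoint integer multisets whose power sums agree up to degree 2. The specific sets are a well-known small example, not taken from the paper.

```python
# Sketch: verifying a Prouhet-Tarry-Escott solution, i.e. two multisets
# of integers whose j-th power sums agree for j = 1, ..., n. The sets
# below form a classical small example with n = 2.

def power_sums_agree(A, B, n):
    return all(sum(a**j for a in A) == sum(b**j for b in B)
               for j in range(1, n + 1))

A, B = [1, 5, 6], [2, 3, 7]
print(power_sums_agree(A, B, 2))   # True:  1+5+6 = 2+3+7 = 12
                                   #        1+25+36 = 4+9+49 = 62
print(power_sums_agree(A, B, 3))   # False: cubes differ (342 vs 378)
```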
4. In this work, we show that the class of multivariate degree-$d$ polynomials mapping $\{0,1\}^n$ to any Abelian group $G$ is locally correctable with $\tilde{O}_d((\log n)^d)$ queries for up to a fraction of errors approaching half the minimum distance of the underlying code. In particular, this result holds even for polynomials over the reals or the rationals, special cases that were previously not known. Further, we show that they are locally list correctable up to a fraction of errors approaching the minimum distance of the code. These results build on and extend the prior work of the authors [ABPSS24] (STOC 2024), who considered the case of linear polynomials and gave analogous results. Low-degree polynomials over the Boolean cube $\{0,1\}^n$ arise naturally in Boolean circuit complexity and learning theory, and our work furthers the study of their coding-theoretic properties. Extending the results of [ABPSS24] from linear to higher-degree polynomials involves several new challenges, and handling them gives us further insights into the properties of low-degree polynomials over the Boolean cube. For local correction, we construct a set of points in the Boolean cube that lies between two exponentially close parallel hyperplanes and is moreover an interpolating set for degree-$d$ polynomials. To show that the class of degree-$d$ polynomials is list-decodable up to the minimum distance, we stitch together results on anti-concentration of low-degree polynomials, the Sunflower lemma, and the Footprint bound for counting common zeroes of polynomials. Analyzing the local list corrector of [ABPSS24] for higher-degree polynomials involves understanding random restrictions of non-zero degree-$d$ polynomials on a Hamming slice. In particular, we show that a simple random restriction process for reducing the dimension of the Boolean cube is a suitably good sampler for Hamming slices.
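The local-correction setting above can be illustrated by a standard fact (not the paper's construction): for a degree-$d$ polynomial on the Boolean cube, the alternating sum of its values over any $(d+1)$-dimensional subcube vanishes, so a corrupted value can in principle be recovered from the other vertices of the subcube. The sketch below checks this identity on an assumed small example.

```python
# Sketch: for a degree-d polynomial f on {0,1}^n (values in an Abelian
# group, here the integers), the alternating sum of f over any
# (d+1)-dimensional subcube is 0. Local correction can exploit this:
# f at one vertex is determined by f at the other 2^(d+1) - 1 vertices.
from itertools import chain, combinations

def subsets(T):
    return chain.from_iterable(combinations(T, r) for r in range(len(T) + 1))

def flip(x, S):
    return tuple(1 - x[i] if i in S else x[i] for i in range(len(x)))

# Assumed example: a degree-2 polynomial on {0,1}^5 with integer values.
def f(x):
    return 3 * x[0] * x[1] - 2 * x[2] * x[4] + x[3] + 7

base = (0, 0, 0, 0, 0)   # base vertex of the subcube
T = (0, 2, 3)            # d + 1 = 3 free coordinates

alt_sum = sum((-1) ** len(S) * f(flip(base, S)) for S in subsets(T))
print(alt_sum)   # 0: the (d+1)-st finite difference kills degree <= d
```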
5. Convolutional codes have been widely studied and used in many systems. As the number of memory elements increases, frame error rate (FER) improves, but computational complexity increases exponentially. Recently, decoders that achieve reduced average complexity through list decoding have been demonstrated when the convolutional encoder polynomials share a common factor that can be understood as a CRC or, more generally, an expurgating linear function (ELF). However, classical convolutional codes avoid such common factors because they result in a catastrophic encoder. This paper provides a way to access the complexity reduction possible with list decoding even when the convolutional encoder polynomials do not share a common factor. Decomposing the original code into component encoders that fully exclude some polynomials can allow an ELF to be factored from each component. Dual list decoding of the component encoders can often find the maximum-likelihood (ML) codeword. Including a fallback to regular Viterbi decoding yields excellent FER performance while requiring less average complexity than always performing Viterbi on the original trellis. A best-effort dual list decoder that avoids Viterbi has performance similar to the ML decoder. Component encoders that have a shared polynomial allow for even greater complexity reduction.
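The common-factor condition driving this design can be checked directly. Below is a hedged sketch of a gcd over GF(2)[x] that tests whether two encoder polynomials share a nontrivial factor, which is what classical designs avoid as catastrophic; the bitmask encoding and the example polynomials are illustrative assumptions.

```python
# Sketch: test whether two binary encoder polynomials share a common
# factor, the condition for a catastrophic convolutional encoder.
# Polynomials over GF(2) are encoded as integer bitmasks: bit i is the
# coefficient of x^i (an assumed representation, for illustration).

def gf2_mod(a, b):
    """Remainder of a divided by b in GF(2)[x] (bitmask encoding)."""
    while a and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

# g0 = 1 + x + x^2, g1 = 1 + x^2: the classic rate-1/2 (7, 5 octal)
# encoder -- coprime, hence non-catastrophic.
print(gf2_gcd(0b111, 0b101))    # 1: no common factor
# g0 = 1 + x, g1 = 1 + x^2 = (1 + x)^2 over GF(2): catastrophic pair.
print(gf2_gcd(0b011, 0b101))    # 0b11, i.e. the common factor 1 + x
```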