Title: Bounds on the List Size of Successive Cancellation List Decoding
Successive cancellation list decoding of polar codes provides very good performance for short to moderate block lengths. However, the list size required to approach the performance of maximum-likelihood decoding is still not well understood theoretically. This work identifies information-theoretic quantities that are closely related to this required list size. It also provides a natural approximation for these quantities that can be computed efficiently even for very long codes. Simulation results are provided for the binary erasure channel as well as the binary-input additive white Gaussian noise channel.
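The step that governs the list size in successive cancellation list (SCL) decoding is the fork-and-prune operation: each surviving path is extended by both values of the next bit, and only the L best paths are kept. The sketch below shows just that step in isolation; the penalty function is a placeholder for the polar LLR recursions, and all names are illustrative rather than taken from the paper.

```python
import heapq

def scl_expand_and_prune(paths, list_size, bit_penalty):
    """One SCL decoding step: fork every surviving path on the next bit and
    keep the list_size candidates with the smallest accumulated penalty."""
    candidates = [
        (metric + bit_penalty(bits, b), bits + (b,))
        for metric, bits in paths
        for b in (0, 1)
    ]
    return heapq.nsmallest(list_size, candidates, key=lambda c: c[0])

# Toy run with an arbitrary stand-in penalty (a real decoder derives this
# from the channel LLRs via the polar recursions).
penalty = lambda bits, b: float(b != (len(bits) % 2))
paths = [(0.0, ())]
for _ in range(4):                       # decode four information bits
    paths = scl_expand_and_prune(paths, list_size=2, bit_penalty=penalty)
print(paths[0])                          # best surviving path and its metric
```

Growing list_size makes it less likely that the correct path is discarded during pruning; how large the list must be before performance approaches maximum-likelihood decoding is the quantity this paper studies.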
Award ID(s):
1718494
PAR ID:
10303734
Author(s) / Creator(s):
Date Published:
Journal Name:
2020 International Conference on Signal Processing and Communications (SPCOM)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. List-decodability of Reed-Solomon codes has received a lot of attention, but the best-possible dependence between the parameters is still not well understood. In this work, we focus on the case where the list-decoding radius is of the form r = 1 − ε for ε tending to zero. Our main result states that there exist Reed-Solomon codes with rate Ω(ε) which are (1 − ε, O(1/ε))-list-decodable, meaning that any Hamming ball of radius 1 − ε contains at most O(1/ε) codewords. This trade-off between rate and list-decoding radius is best possible for any code with list size less than exponential in the block length. By achieving this trade-off between rate and list-decoding radius we improve a recent result of Guo, Li, Shangguan, Tamo, and Wootters, and resolve the main motivating question of their work. Moreover, while their result requires the field to be exponentially large in the block length, we only need the field size to be polynomially large (and in fact, almost-linear suffices). We deduce our main result from a more general theorem, in which we prove good list-decodability properties of random puncturings of any given code with very large distance.
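The list-decodability notion used above can be verified exhaustively for toy parameters: a code is (1 − ε, O(1/ε))-list-decodable if every Hamming ball of (normalized) radius 1 − ε, i.e., absolute radius (1 − ε)n, contains at most O(1/ε) codewords. The brute-force check below does this for a tiny Reed-Solomon code; the parameters are illustrative, the enumeration is only feasible at this scale, and nothing here reproduces the random-puncturing argument of the paper.

```python
import itertools

def rs_codewords(p, alphas, k):
    """Yield every codeword of the [n, k] Reed-Solomon code over GF(p):
    evaluations of each polynomial of degree < k at the points alphas."""
    for coeffs in itertools.product(range(p), repeat=k):
        yield tuple(
            sum(c * pow(a, i, p) for i, c in enumerate(coeffs)) % p
            for a in alphas
        )

def list_size_in_ball(p, alphas, k, center, radius):
    """Count codewords within Hamming distance `radius` of `center`."""
    return sum(
        1
        for cw in rs_codewords(p, alphas, k)
        if sum(ci != wi for ci, wi in zip(cw, center)) <= radius
    )

# Toy check: an [n = 6, k = 2] RS code over GF(7) with eps = 1/3, so the
# ball has absolute radius (1 - eps) * n = 4.
p, alphas, k = 7, range(6), 2
center = (1, 2, 3, 4, 5, 6)              # a received word (here: f(x) = x + 1)
radius = int((1 - 1 / 3) * len(alphas))  # = 4
print(list_size_in_ball(p, alphas, k, center, radius))  # prints 1
```

The printed value is 1 because the center happens to be the codeword of f(x) = x + 1, and two distinct lines agree in at most one of the six evaluation points, putting every other codeword at distance at least 5.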
  2. Maximum-likelihood (ML) decoding of tail-biting convolutional codes (TBCCs) with S = 2^v states traditionally requires a separate S-state trellis for each of the S possible starting/ending states, resulting in complexity proportional to S^2. Lower-complexity ML decoders for TBCCs have complexity proportional to S log S. This high complexity motivates the use of the wrap-around Viterbi algorithm, which sacrifices ML performance for complexity proportional to S. This paper presents an ML decoder for TBCCs that uses list decoding to achieve an average complexity proportional to S at operational signal-to-noise ratios where the expected list size is close to one. The new decoder uses parallel list Viterbi decoding with a progressively growing list size operating on a single S-state trellis. Decoding does not terminate until the most likely tail-biting codeword has been identified. This approach is extended to ML decoding of tail-biting convolutional codes concatenated with a cyclic redundancy check code, as explored recently by Yang et al. and King et al. Constraining the maximum list size further reduces complexity but sacrifices guaranteed ML performance, increasing errors and introducing erasures.
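As a concrete illustration of the decoding rule just described, the sketch below grows the list until the best candidate found is tail-biting (start state equals end state); since candidates are examined in metric order, the first tail-biting path found is the ML tail-biting codeword. Brute-force enumeration over a toy rate-1/2, 4-state encoder stands in for the parallel list Viterbi algorithm; the (7,5) generators, the Hamming-distance metric, and all function names are illustrative assumptions, not the paper's implementation.

```python
def encode(bits, start_state, gens=(0b111, 0b101), v=2):
    """Feedforward rate-1/2 convolutional encoder; the state holds the v
    most recent input bits. Returns (coded bits, end state)."""
    state, out = start_state, []
    for b in bits:
        reg = (b << v) | state                  # current bit plus v past bits
        out.extend((bin(reg & g).count("1") & 1) for g in gens)
        state = reg >> 1                        # drop the oldest bit
    return out, state

def best_paths(rx, K, L, v=2):
    """Brute-force stand-in for list Viterbi: the L best unconstrained
    trellis paths (any start state) by Hamming distance to rx."""
    cands = []
    for s0 in range(1 << v):
        for m in range(1 << K):
            bits = [(m >> i) & 1 for i in range(K)]
            cw, s_end = encode(bits, s0)
            dist = sum(a != b for a, b in zip(cw, rx))
            cands.append((dist, s0, s_end, bits))
    return sorted(cands)[:L]

def ml_tailbiting_decode(rx, K, max_list=64):
    """Grow the list until the best remaining path is tail-biting; the first
    tail-biting path in metric order is the ML tail-biting codeword."""
    L = 1
    while L <= max_list:
        for dist, s0, s_end, bits in best_paths(rx, K, L):
            if s0 == s_end:                     # tail-biting condition
                return bits, dist
        L *= 2                                  # progressively grow the list
    return None                                 # list limit hit: declare erasure

msg = [1, 0, 1, 1, 0, 0]
start = (msg[-1] << 1) | msg[-2]                # tail-biting start = end state
tx, _ = encode(msg, start)
rx = list(tx)
rx[3] ^= 1                                      # one channel bit flip
print(ml_tailbiting_decode(rx, K=len(msg)))     # recovers msg at distance 1
```

Capping max_list mirrors the constrained-list variant described above: if no tail-biting path enters the list before the cap, the decoder declares an erasure instead of guaranteeing the ML answer.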
  3. Lapidoth, Amos; Moser, Stefan M. (Eds.)
    This paper introduces extensions to data-driven polar decoders, enabling list decoding and accommodating asymmetric input distributions. These are crucial steps in developing data-driven codes that 1) achieve capacity and 2) are competitive at moderate block lengths. We commence by integrating list decoding into the data-driven polar codes, which significantly alleviates the inherent error propagation issues associated with successive cancellation decoding. Secondly, we expand the applicability of these codes to channels with stationary, non-uniform input distributions by incorporating the Honda-Yamamoto scheme. Both modifications are computationally efficient and do not require an explicit channel model. Numerical results validate the efficacy of our contributions, which offer a robust and versatile coding mechanism for various channel conditions.
  4. Convolutional codes have been widely studied and used in many systems. As the number of memory elements increases, frame error rate (FER) improves but computational complexity increases exponentially. Recently, decoders that achieve reduced average complexity through list decoding have been demonstrated when the convolutional encoder polynomials share a common factor that can be understood as a CRC or, more generally, an expurgating linear function (ELF). However, classical convolutional codes avoid such common factors because they result in a catastrophic encoder. This paper provides a way to access the complexity reduction possible with list decoding even when the convolutional encoder polynomials do not share a common factor. Decomposing the original code into component encoders that fully exclude some polynomials can allow an ELF to be factored from each component. Dual list decoding of the component encoders can often find the ML codeword. Including a fallback to regular Viterbi decoding yields excellent FER performance while requiring less average complexity than always performing Viterbi on the original trellis. A best-effort dual list decoder that avoids Viterbi has performance similar to the ML decoder. Component encoders that have a shared polynomial allow for even greater complexity reduction.
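The common-factor condition at the center of this approach is simple to test: the encoder polynomials share a factor (interpretable as a CRC or ELF) exactly when their GCD over GF(2) is non-trivial, and, as the abstract notes, classical codes avoid such factors because they make the encoder catastrophic. Below is a minimal, self-contained sketch of that test; the bitmask representation and the example polynomials are illustrative assumptions, not the paper's code.

```python
def gf2_degree(p):
    """Degree of a GF(2) polynomial stored as a bitmask (bit i = coeff of x^i)."""
    return p.bit_length() - 1

def gf2_mod(a, b):
    """Remainder of a divided by b over GF(2) (carry-free long division)."""
    while a and gf2_degree(a) >= gf2_degree(b):
        a ^= b << (gf2_degree(a) - gf2_degree(b))
    return a

def gf2_gcd(a, b):
    """Euclid's algorithm on GF(2) polynomials."""
    while b:
        a, b = b, gf2_mod(a, b)
    return a

# The classical (7,5) generators g1 = 1 + x + x^2 and g2 = 1 + x^2 are
# coprime, so no CRC/ELF can be factored out of them directly:
print(bin(gf2_gcd(0b111, 0b101)))    # 0b1: no shared factor
# By contrast, x + x^2 and x + x^3 share x(1 + x); the (1 + x) part is a
# non-delay common factor, which makes a classical encoder catastrophic:
print(bin(gf2_gcd(0b110, 0b1010)))   # 0b110 = x(1 + x)
```

The decomposition in the paper works around exactly this obstruction: by splitting the code into component encoders that each fully exclude some polynomials, an ELF can be factored from each component even though the full set of generators is coprime.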