Title: Neural Normalized Min-Sum Message-Passing vs. Viterbi Decoding for the CCSDS Line Product Code
The Consultative Committee for Space Data Systems (CCSDS) 141.11-O-1 Line Product Code (LPC) provides a rare opportunity to compare maximum-likelihood decoding and message passing. The LPC considered in this paper is intended to serve as the inner code in conjunction with a (255,239) Reed-Solomon (RS) code whose symbols are bytes of data. This paper represents the 141.11-O-1 LPC as a bipartite graph and uses that graph to formulate both maximum-likelihood (ML) decoding and message-passing algorithms. ML decoding necessarily achieves the best frame error rate (FER). However, a fixed-point implementation of a Neural Normalized Min-Sum (N-NMS) message-passing decoder closely approaches ML performance with significantly lower complexity.
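To make the decoder concrete: the normalized min-sum check-node update scales the usual min-sum magnitude by a normalization factor, and the neural (N-NMS) variant replaces that single factor with weights learned per edge and per iteration. Below is a minimal Python sketch of the plain normalized min-sum update for one check node; the function name, the scalar `alpha`, and the dense-loop structure are illustrative assumptions, not the paper's fixed-point implementation.

```python
import numpy as np

def nms_check_update(llrs_in, alpha=0.8):
    """Normalized min-sum update at one check node.

    llrs_in: incoming variable-to-check LLR messages (1-D array, length >= 2).
    alpha:   normalization factor; an N-NMS decoder learns such weights
             (per edge and per iteration) instead of fixing one scalar.
    Returns the outgoing check-to-variable messages.
    """
    signs = np.where(llrs_in >= 0, 1.0, -1.0)   # treat 0 as positive
    mags = np.abs(llrs_in)
    total_sign = np.prod(signs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    out = np.empty_like(llrs_in, dtype=float)
    for i in range(len(llrs_in)):
        # Outgoing magnitude is the minimum over all *other* incoming edges.
        m = min2 if i == order[0] else min1
        out[i] = alpha * total_sign * signs[i] * m
    return out
```

For example, `nms_check_update(np.array([1.5, -0.7, 2.2]))` returns scaled messages whose signs enforce the parity constraint and whose magnitudes come from the two smallest input magnitudes.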
Award ID(s): 1911166
PAR ID: 10398476
Date Published:
Journal Name: ICC 2022 - IEEE International Conference on Communications
Page Range / eLocation ID: 2375 to 2380
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. An expurgating linear function (ELF) is an outer code that disallows low-weight codewords of the inner code. ELFs can be designed either to maximize the minimum distance or to minimize the codeword error rate (CER) of the expurgated code. A list-decoding sieve can efficiently identify ELFs that maximize the minimum distance of the expurgated code. For convolutional inner codes, this paper provides analytical distance spectrum union (DSU) bounds on the CER of the concatenated code. For short codeword lengths, ELFs transform a good inner code into a great concatenated code. For a constant message size of K = 64 bits or constant codeword blocklength of N = 152 bits, an ELF can reduce the gap at CER 10^-6 between the DSU and the random-coding union (RCU) bounds from over 1 dB for the inner code alone to 0.23 dB for the concatenated code. The DSU bounds can also characterize puncturing that mitigates the rate overhead of the ELF while maintaining the DSU-to-RCU gap. List Viterbi decoding guided by the ELF achieves maximum-likelihood (ML) decoding of the concatenated code with a sufficiently large list size. The rate-K/(K+m) ELF outer code reduces the rate, and list decoding increases decoder complexity. As SNR increases, the average list size converges to 1 and the average complexity is similar to Viterbi decoding on the trellis of the inner code. For rare large-magnitude noise events, which occur less often than the FER of the inner code, a deep search in the list finds the ML codeword.
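The decoder described in item 1 pairs a list Viterbi search over the inner convolutional code with an ELF check, applied to each candidate in likelihood order much like a CRC check. The sketch below illustrates that control flow under stated assumptions: `list_decoder(received, L)` is a hypothetical routine returning the L most likely inner codeword candidates, and the ELF is checked by polynomial division over GF(2).

```python
import numpy as np

def elf_passes(bits, poly):
    """True iff the candidate polynomial is divisible by the ELF polynomial.

    bits, poly: 0/1 sequences, most significant coefficient first. In this
    respect the ELF acts like a CRC: surviving candidates leave remainder 0.
    """
    r = np.array(bits, dtype=np.uint8)
    p = np.array(poly, dtype=np.uint8)
    for i in range(len(r) - len(p) + 1):
        if r[i]:
            r[i:i + len(p)] ^= p      # GF(2) long-division step
    return not r[len(r) - len(p) + 1:].any()

def elf_guided_decode(received, list_decoder, poly, max_list=1024):
    """List Viterbi decoding of the inner code, guided by the ELF.

    The first candidate (in decreasing likelihood) that satisfies the ELF
    is the ML codeword of the concatenated code.
    """
    L = 1
    while L <= max_list:
        for cand in list_decoder(received, L):
            if elf_passes(cand, poly):
                return cand
        L *= 2       # deepen the search only for rare noise events
    return None      # list budget exhausted: declare an erasure
```

At high SNR the first candidate usually passes, so the average complexity approaches that of ordinary Viterbi decoding, matching the behavior described above.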
  2. Maximum-likelihood (ML) decoding of tail-biting convolutional codes (TBCCs) with S = 2^v states traditionally requires a separate S-state trellis for each of the S possible starting/ending states, resulting in complexity proportional to S^2. Lower-complexity ML decoders for TBCCs have complexity proportional to S log S. This high complexity motivates the use of the wrap-around Viterbi algorithm, which sacrifices ML performance for complexity proportional to S. This paper presents an ML decoder for TBCCs that uses list decoding to achieve an average complexity proportional to S at operational signal-to-noise ratios where the expected list size is close to one. The new decoder uses parallel list Viterbi decoding with a progressively growing list size operating on a single S-state trellis. Decoding does not terminate until the most likely tail-biting codeword has been identified. This approach is extended to ML decoding of tail-biting convolutional codes concatenated with a cyclic redundancy check code, as explored recently by Yang et al. and King et al. Constraining the maximum list size further reduces complexity but sacrifices guaranteed ML performance, increasing errors and introducing erasures.
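A sketch of the progressive-list idea in item 2, under assumptions: `list_viterbi(received, L)` is a hypothetical routine that runs parallel list Viterbi decoding on a single S-state trellis with all start states allowed and returns the L best paths, each reported with its start and end state.

```python
def tbcc_ml_decode(received, list_viterbi, max_list=2**14):
    """ML decoding of a tail-biting convolutional code via a growing list.

    Candidates arrive in decreasing likelihood, so the first path whose
    start state equals its end state is the most likely tail-biting
    codeword, i.e., the ML decision.
    """
    L = 1
    while L <= max_list:
        for codeword, start_state, end_state in list_viterbi(received, L):
            if start_state == end_state:   # tail-biting condition
                return codeword
        L *= 2    # no tail-biting path among the top L; grow the list
    return None   # constrained list size: report an erasure
```

Capping `max_list` reproduces the trade-off noted above: lower complexity at the cost of occasional errors and erasures.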
  3. Reed–Muller (RM) codes exhibit good performance under maximum-likelihood (ML) decoding due to their highly symmetric structure. In this paper, we explore the question of whether the code symmetry of RM codes can also be exploited to achieve near-ML performance in practice. The main idea is to apply iterative decoding to a highly redundant parity-check (PC) matrix that contains only the minimum-weight dual codewords as rows. As examples, we consider the peeling decoder for the binary erasure channel, linear-programming and belief propagation (BP) decoding for the binary-input additive white Gaussian noise channel, and bit-flipping and BP decoding for the binary symmetric channel. For short block lengths, it is shown that near-ML performance can indeed be achieved in many cases. We also propose a method to tailor the PC matrix to the received observation by selecting only a small fraction of useful minimum-weight PCs before decoding begins. This allows one to both improve performance and significantly reduce complexity compared to using the full set of minimum-weight PCs.
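Among the decoders listed in item 3, bit-flipping on the redundant parity-check matrix is the simplest to sketch. Below, `H` is assumed to hold the minimum-weight dual codewords of the RM code as rows (precomputed elsewhere); the flipping rule and iteration cap are illustrative choices, not the paper's exact schedule.

```python
import numpy as np

def bit_flip_decode(y, H, max_iters=50):
    """Bit-flipping decoding of a hard-decision word against a redundant
    parity-check matrix H (rows = minimum-weight dual codewords).

    y: 0/1 received word (binary symmetric channel output).
    Returns the current estimate once all checks are satisfied, or after
    max_iters iterations.
    """
    x = np.array(y, dtype=np.int64)
    for _ in range(max_iters):
        syndrome = (H @ x) % 2            # 1 marks an unsatisfied check
        if not syndrome.any():
            return x                      # valid codeword found
        votes = H.T @ syndrome            # unsatisfied checks touching each bit
        x[votes == votes.max()] ^= 1      # flip the most-suspect bits
    return x
```

The redundancy of H is what gives each bit many independent "votes"; tailoring H to the received observation, as proposed above, simply selects which rows participate.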
  4. Iterative decoding of graph-based codes and sparse recovery through approximate message passing (AMP) are two research areas that have seen monumental progress in recent decades. Inspired by these advances, this article introduces sparse regression LDPC codes (SR-LDPC codes) and their decoding. Sparse regression codes (SPARCs) are a class of error correcting codes that build on ideas from compressed sensing and can be decoded using AMP. In certain settings, SPARCs are known to achieve capacity; yet, their performance suffers at finite block lengths. Likewise, low-density parity-check (LDPC) codes can be decoded efficiently using belief propagation and can also be capacity achieving. This article introduces a novel concatenated coding structure that combines an LDPC outer code with a SPARC-inspired inner code. Efficient decoding for such a code can be achieved using AMP with a denoiser that performs belief propagation on the factor graph of the outer LDPC code. The proposed framework exhibits performance improvements over SPARCs and standard LDPC codes for finite block lengths and results in a steep waterfall in error performance, a phenomenon not observed in uncoded SPARCs.
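The decoding loop in item 4 can be summarized as standard AMP with a pluggable denoiser. The sketch below shows the generic AMP recursion; for SR-LDPC codes, `denoise` would run belief propagation on the outer LDPC factor graph. The interface of `denoise` (returning the estimate and the average derivative needed for the Onsager term) and the dense matrix `A` are simplifying assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def amp_decode(y, A, denoise, n_iters=25):
    """Generic AMP recursion with a pluggable denoiser.

    y: length-n observation; A: n x N sensing matrix.
    denoise(r) returns (estimate, avg_derivative), where avg_derivative is
    the mean of the denoiser's diagonal Jacobian, used in the Onsager term.
    """
    n, N = A.shape
    s = np.zeros(N)                # current estimate of the message vector
    z = y.astype(float).copy()     # corrected residual
    for _ in range(n_iters):
        r = s + A.T @ z                            # effective observation
        s, avg_deriv = denoise(r)                  # e.g., BP on the LDPC graph
        z = y - A @ s + (N / n) * avg_deriv * z    # Onsager-corrected residual
    return s
```

Replacing a separable denoiser with BP over the LDPC factor graph is what couples the SPARC-style inner code to the outer LDPC code and produces the steep waterfall described above.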