

Title: A Threshold-Based Min-Sum Algorithm to Lower the Error Floors of Quantized LDPC Decoders
For decoding low-density parity-check (LDPC) codes, the attenuated min-sum algorithm (AMSA) and the offset min-sum algorithm (OMSA) can outperform the conventional min-sum algorithm (MSA) at low signal-to-noise ratios (SNRs), i.e., in the “waterfall region” of the bit error rate curve. This paper demonstrates that, for quantized decoders, the MSA actually outperforms the AMSA and OMSA in the “error floor” region, and that all three algorithms suffer from a relatively high error floor. This motivates the introduction of a modified MSA that is designed to outperform the MSA, AMSA, and OMSA across all SNRs. The new algorithm is based on the assumption that trapping sets are the major cause of the error floor for quantized LDPC decoders. A performance estimation tool based on trapping sets is used to verify the effectiveness of the new algorithm and also to guide parameter selection. We also show that the implementation complexity of the new algorithm is only slightly higher than that of the AMSA or OMSA. Finally, the simulated performance of the new algorithm, using several classes of LDPC codes (including spatially coupled LDPC codes), is shown to outperform the MSA, AMSA, and OMSA across all SNRs.
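As a concrete point of reference for the three baseline algorithms compared above, the following sketch implements their check-node updates side by side. The function name and the attenuation/offset values (alpha, beta) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def check_node_update(msgs, variant="msa", alpha=0.8, beta=0.15):
    """Check-to-variable messages for one check node under MSA/AMSA/OMSA."""
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)          # exclude the target edge
        mag = np.min(np.abs(others))         # plain min-sum magnitude
        if variant == "amsa":
            mag *= alpha                     # multiplicative attenuation
        elif variant == "omsa":
            mag = max(mag - beta, 0.0)       # subtractive offset, clipped at zero
        out[i] = np.prod(np.sign(others)) * mag   # reapply the parity sign
    return out

print(check_node_update([1.2, -0.4, 2.5], variant="omsa"))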
Award ID(s):
1757207 1710920
NSF-PAR ID:
10139122
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Transactions on Communications
ISSN:
0090-6778
Page Range / eLocation ID:
1 to 1
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. It is well known that for decoding low-density parity-check (LDPC) codes, the attenuated min-sum algorithm (AMSA) and the offset min-sum algorithm (OMSA) can outperform the conventional min-sum algorithm (MSA) at low signal-to-noise ratios (SNRs). In this paper, we demonstrate that, for quantized LDPC decoders, although the MSA achieves better high-SNR performance than the AMSA and OMSA, the MSA, AMSA, and OMSA all suffer from a relatively high error floor. Therefore, we propose a novel modification of the MSA for quantized LDPC decoding with the aim of lowering the error floor. Compared to the quantized MSA, the proposed modification is also helpful at low SNRs, where it matches the waterfall performance of the quantized AMSA and OMSA. The new algorithm is designed based on the assumption that trapping/absorbing sets (or other problematic graphical objects) are the major cause of the error floor for quantized LDPC decoders, and it aims to reduce the probability that these problematic objects lead to decoding errors.
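This abstract does not spell out the modification itself, so the sketch below is only one plausible reading of a "threshold-based" rule: attenuate the min-sum magnitude only when it falls below a threshold, leaving strong (reliable) messages untouched. The parameters alpha and tau are hypothetical and this should not be taken as the authors' exact algorithm.

```python
import numpy as np

def threshold_check_node_update(msgs, alpha=0.8, tau=2.0):
    """Hypothetical threshold rule: attenuate only magnitudes below tau."""
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)
        mag = np.min(np.abs(others))
        if mag < tau:            # hypothetical threshold test
            mag *= alpha         # attenuate only weak (unreliable) magnitudes
        out[i] = np.prod(np.sign(others)) * mag
    return out

print(threshold_check_node_update([1.2, -0.4, 2.5]))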
  2. This paper proposes a finite-precision decoding method for low-density parity-check (LDPC) codes that features the three steps of Reconstruction, Computation, and Quantization (RCQ). Unlike Mutual-Information-Maximization Quantized Belief Propagation (MIM-QBP), RCQ can approximate either belief propagation or Min-Sum decoding. MIM-QBP decoders do not work well when the fraction of degree-2 variable nodes is large. However, a large fraction of degree-2 variable nodes is sometimes used to facilitate a fast encoding structure, as seen in the IEEE 802.11 standard and the DVB-S2 standard. In contrast to MIM-QBP, the proposed RCQ decoder may be applied to any off-the-shelf LDPC code, including those with a large fraction of degree-2 variable nodes. Simulations show that a 4-bit Min-Sum RCQ decoder delivers frame error rate (FER) performance within 0.1 dB of floating-point belief propagation (BP) for the IEEE 802.11 standard LDPC code in the low SNR region. The RCQ decoder actually outperforms floating-point BP and Min-Sum in the high SNR region, where the FER is less than 10⁻⁵. This paper also introduces Hierarchical Dynamic Quantization (HDQ) to design the time-varying non-uniform quantizers required by RCQ decoders. HDQ is a low-complexity design technique that is slightly sub-optimal. Simulation results comparing HDQ and optimal quantization on the symmetric binary-input memoryless additive white Gaussian noise channel show a mutual information loss of less than 10⁻⁶ bits, which is negligible in practice.
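A minimal sketch of the three RCQ steps at one check node is given below, assuming small-integer messages, a reconstruction table mapping integers to real values, a min-sum computation, and a threshold quantizer mapping back to integers. The tables shown are placeholders; in the paper they would be time-varying designs produced, e.g., by HDQ.

```python
import numpy as np

# Placeholder 2-bit tables; an RCQ decoder would design these per iteration.
recon_table = np.array([-3.0, -1.0, 1.0, 3.0])   # integer message -> real LLR
quant_thresholds = np.array([-2.0, 0.0, 2.0])    # real value -> integer message

def rcq_check_update(int_msgs):
    llrs = recon_table[np.asarray(int_msgs)]      # step 1: Reconstruction
    out = np.empty_like(llrs)
    for i in range(len(llrs)):                    # step 2: Computation (min-sum)
        others = np.delete(llrs, i)
        out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return np.searchsorted(quant_thresholds, out) # step 3: Quantization

print(rcq_check_update([0, 3, 2]))   # small-integer messages in and out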
  3. Quantum error correction has recently been shown to benefit greatly from specific physical encodings of the code qubits. In particular, several researchers have considered the individual code qubits being encoded with the continuous-variable Gottesman-Kitaev-Preskill (GKP) code, and then imposed an outer discrete-variable code such as the surface code on these GKP qubits. Under such a concatenation scheme, the analog information from the inner GKP error correction improves the noise threshold of the outer code. However, the surface code has vanishing rate and demands a lot of resources with growing distance. In this work, we concatenate the GKP code with generic quantum low-density parity-check (QLDPC) codes and demonstrate a natural way to exploit the GKP analog information in iterative decoding algorithms. We first show the noise thresholds for two lifted-product QLDPC code families, and then show the improvements of noise thresholds when the iterative decoder, a hardware-friendly min-sum algorithm (MSA), utilizes the GKP analog information. We also show that, when the GKP analog information is combined with a sequential update schedule for the MSA, the scheme surpasses the well-known CSS Hamming bound for these code families. Furthermore, we observe that the GKP analog information helps the iterative decoder escape harmful trapping sets in the Tanner graph of the QLDPC code, thereby eliminating or significantly lowering the error floor of the logical error rate curves. Finally, we discuss new fundamental and practical questions that arise from this work on channel capacity under GKP analog information, and on improving decoder design and analysis.
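One natural way to feed GKP analog information to an MSA decoder, in the spirit described above, is to derive a per-qubit channel LLR from the analog deviation of each GKP measurement. The sketch below uses a standard Gaussian-displacement model for this purpose; the formula is a common assumption in the GKP decoding literature, not a quotation of this paper's method.

```python
import numpy as np

def gkp_llr(analog_dev, sigma):
    """LLR that a GKP qubit is error-free, from its analog deviation.

    analog_dev : deviation from the nearest sqrt(pi) lattice point.
    sigma      : std. dev. of the assumed Gaussian displacement noise.
    """
    d = np.abs(np.asarray(analog_dev, dtype=float))
    # Compare "no error" (deviation d) with "error" (deviation sqrt(pi) - d)
    # under a Gaussian likelihood; larger LLR = more reliable qubit.
    return ((np.sqrt(np.pi) - d) ** 2 - d ** 2) / (2 * sigma ** 2)

# These per-qubit LLRs would initialize the MSA instead of a uniform value.
print(gkp_llr([0.05, 0.80, -0.30], sigma=0.5))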
  4. The new 5G communications standard increases data rates and supports low-latency communication that places constraints on the computational complexity of channel decoders. 5G low-density parity-check (LDPC) codes have the so-called protograph-based raptor-like (PBRL) structure which offers inherent rate-compatibility and excellent performance. Practical LDPC decoder implementations use message-passing decoding with finite precision, which becomes coarse as complexity is more severely constrained. Performance degrades as the precision becomes more coarse. Recently, the information bottleneck (IB) method was used to design mutual-information-maximizing lookup tables that replace conventional finite-precision node computations. The IB approach exchanges messages represented by integers with very small bit width. This paper extends the IB principle to the flexible class of PBRL LDPC codes as standardized in 5G. The extensions include puncturing and rate-compatible IB decoder design. As an example of the new approach, a 4-bit information bottleneck decoder is evaluated for PBRL LDPC codes over a typical range of rates. Frame error rate simulations show that the proposed scheme outperforms offset min-sum decoding algorithms and operates very close to double-precision sum-product belief propagation decoding. 
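To illustrate the flavor of lookup-table decoding described above, the sketch below combines 4-bit integer messages by folding them through a two-input table, which is the kind of operation that replaces conventional node arithmetic. The specific table here is an arbitrary placeholder (a clipped, offset sum); an actual information bottleneck decoder would use tables designed to maximize mutual information.

```python
import numpy as np

LEVELS = 16  # 4-bit messages

# Arbitrary placeholder posing as a two-input lookup table.
pair_table = np.clip(
    np.add.outer(np.arange(LEVELS) - 8, np.arange(LEVELS) - 8) + 8,
    0, LEVELS - 1,
)

def lut_combine(int_msgs):
    """Fold a list of 4-bit integer messages through the pairwise table."""
    acc = int_msgs[0]
    for m in int_msgs[1:]:
        acc = pair_table[acc, m]   # one lookup replaces an arithmetic node op
    return acc

print(lut_combine([3, 12, 9]))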
  5. Neural Normalized MinSum (N-NMS) decoding delivers better frame error rate (FER) performance on linear block codes than conventional Normalized MinSum (NMS) by assigning dynamic multiplicative weights to each check-to-variable node message in each iteration. Previous N-NMS efforts primarily investigated short block codes (N < 1000), because the number of N-NMS parameters to be trained scales with the number of edges in the parity check matrix and the number of iterations. This imposes an impractical memory requirement for conventional tools such as PyTorch and TensorFlow to create the neural network and store gradients. This paper provides efficient methods of training the parameters of N-NMS decoders that support longer block lengths. Specifically, this paper introduces a family of Neural 2-Dimensional Normalized MinSum (N-2D-NMS) decoders with various reduced parameter sets and shows how performance varies with the parameter set selected. The N-2D-NMS decoders share weights with respect to check node and/or variable node degree. Simulation results justify a reduced parameter set, showing that the trained weights of N-NMS have smaller values for the neurons corresponding to larger check/variable node degrees. Further simulation results on a (3096, 1032) Protograph-Based Raptor-Like (PBRL) code show that the N-2D-NMS decoder can achieve the same FER as N-NMS while also providing at least a 99.7% parameter reduction. Furthermore, the N-2D-NMS decoder for the (16200, 7200) DVB-S2 standard LDPC code shows a lower error floor than belief propagation. Finally, this paper proposes a hybrid decoder training structure that utilizes a neural network which combines a feedforward module with a recurrent module. The decoding performance and parameter reduction of the hybrid training depend on the length of the recurrent module of the neural network.
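The parameter-sharing idea is easy to state concretely: rather than one trained weight per edge per iteration, all edges whose check (and, optionally, variable) nodes have the same degree share a single weight. The sketch below illustrates this indexing; the degree pairs and weight values are made-up placeholders, not trained values from the paper.

```python
# Placeholder trained weights indexed by (check degree, variable degree).
weights = {(6, 2): 0.90, (6, 3): 0.82, (7, 3): 0.78}

def n2d_nms_scale(check_deg, var_deg, raw_msg):
    """Scale a min-sum check-to-variable message with a degree-shared weight."""
    return weights[(check_deg, var_deg)] * raw_msg

# Every edge whose check node has degree 6 and variable node degree 3
# shares the same single trained weight:
print(n2d_nms_scale(6, 3, 1.4))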