Title: Suspicion Distillation Gradient Descent Bit-Flipping Algorithm
We propose a novel variant of the gradient descent bit-flipping (GDBF) algorithm for decoding low-density parity-check (LDPC) codes over the binary symmetric channel. The new bit-flipping rule is based on the reliability information passed from neighboring nodes in the corresponding Tanner graph. The name Suspicion Distillation reflects the main feature of the algorithm: in every iteration, we assign a level of suspicion to each variable node about its current bit value. The level of suspicion of a variable node is used to decide whether the corresponding bit will be flipped. In addition, in each iteration, we determine the number of satisfied and unsatisfied checks that connect a suspicious node with other suspicious variable nodes. In this way, over the course of the iterations, we "distill" such suspicious bits and flip them. The deterministic nature of the proposed algorithm results in a low-complexity implementation, as the bit-flipping rule can be obtained by modifying the original GDBF rule using basic logic gates, and the modification is not applied in all decoding iterations. Furthermore, we present a more general framework based on deterministic re-initialization of the decoder input. The performance of the resulting algorithm is analyzed for codes of various lengths, and significant improvements are observed compared to state-of-the-art hard-decision decoding algorithms.
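The abstract describes the algorithm only at a high level. For reference, below is a minimal sketch of the baseline hard-decision GDBF rule for the BSC that Suspicion Distillation modifies; the inversion metric, the threshold `delta`, and all names are illustrative assumptions, and the suspicion-distillation filtering step itself (restricting the flip set by the checks between suspicious nodes) is not reproduced here.

```python
import numpy as np

def gdbf_decode(H, y, max_iter=100, delta=0):
    """Minimal sketch of hard-decision GDBF over the BSC (illustrative,
    not the paper's exact rule). H is an (m, n) binary parity-check
    matrix; y is the received hard-decision vector in {0, 1}."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2              # 1 marks an unsatisfied check
        if not syndrome.any():
            break                         # all checks satisfied: codeword found
        # Per-bit inversion metric: count of unsatisfied neighboring
        # checks, penalized when the bit still agrees with the channel.
        agree = 1 - 2 * (x ^ y)           # +1 if x matches y, else -1
        metric = syndrome @ H - agree
        # Flip the bits whose metric is (near-)maximal; these are the
        # "most suspicious" positions in this iteration.
        x = x ^ (metric >= metric.max() - delta).astype(x.dtype)
    return x
```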
Award ID(s): 2106189
NSF-PAR ID: 10340042
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Entropy
Volume: 24
Issue: 4
ISSN: 1099-4300
Page Range / eLocation ID: 558
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Neural Normalized MinSum (N-NMS) decoding delivers better frame error rate (FER) performance on linear block codes than conventional Normalized MinSum (NMS) by assigning dynamic multiplicative weights to each check-to-variable node message in each iteration. Previous N-NMS efforts primarily investigated short block codes (N < 1000), because the number of N-NMS parameters to be trained scales with the number of edges in the parity-check matrix and the number of iterations, which imposes an impractical memory requirement when conventional tools such as PyTorch and TensorFlow are used to create the neural network and store gradients. This paper provides efficient methods of training the parameters of N-NMS decoders that support longer block lengths. Specifically, this paper introduces a family of Neural 2-Dimensional Normalized MinSum (N-2D-NMS) decoders with various reduced parameter sets and shows how performance varies with the parameter set selected. The N-2D-NMS decoders share weights with respect to check node and/or variable node degree (see the first sketch after this list). Simulation results justify the reduced parameter sets, showing that the trained N-NMS weights are smaller for the neurons corresponding to larger check/variable node degrees. Further simulation results on a (3096,1032) Protograph-Based Raptor-Like (PBRL) code show that the N-2D-NMS decoder can achieve the same FER as N-NMS while providing at least a 99.7% parameter reduction. Furthermore, the N-2D-NMS decoder for the (16200,7200) DVB-S2 standard LDPC code shows a lower error floor than belief propagation. Finally, this paper proposes a hybrid decoder training structure that uses a neural network combining a feedforward module with a recurrent module. The decoding performance and parameter reduction of the hybrid structure depend on the length of the recurrent module of the neural network.
  2. In this paper, we introduce two new methods of mitigating decoder error propagation for low-latency sliding window decoding (SWD) of spatially coupled low-density parity-check (SC-LDPC) codes. Building on the recently introduced idea of check node (CN) doping of regular SC-LDPC codes, here we employ variable node (VN) doping to fix (set to a known value) a subset of variable nodes in the coupling chain (see the VN-doping sketch after this list). Both of these doping methods allow SWD to recover from error propagation, at the cost of a slight rate loss. Experimental results show that, like CN doping, VN doping improves performance by up to two orders of magnitude compared to undoped SC-LDPC codes in the typical signal-to-noise ratio operating range. Further, compared to CN doping, VN doping has the advantage of not requiring any changes to the decoding process. In addition, a log-likelihood-ratio-based window extension algorithm is proposed to reduce the effect of error propagation. Using this approach, we show that decoding latency can be reduced substantially without any loss in performance.
  3. Neural Normalized MinSum (N-NMS) decoding delivers better frame error rate (FER) performance on linear block codes than conventional Normalized MinSum (NMS) by assigning dynamic multiplicative weights to each check-to-variable message in each iteration. Previous N-NMS efforts have primarily investigated short block codes (N < 1000), because the number of N-NMS parameters to be trained is proportional to the number of edges in the parity-check matrix and the number of iterations, which imposes an impractical memory requirement when PyTorch or TensorFlow is used for training. This paper provides efficient approaches to training the parameters of N-NMS that support longer block lengths. Specifically, this paper introduces a family of Neural 2-Dimensional Normalized MinSum (N-2D-NMS) decoders with various reduced parameter sets and shows how performance varies with the parameter set selected. The N-2D-NMS decoders share weights with respect to check node and/or variable node degree. Simulation results justify this approach, showing that the trained weights of N-NMS have a strong correlation to the check node degree, variable node degree, and iteration number. Further simulation results on the (3096,1032) protograph-based raptor-like (PBRL) code show that the N-2D-NMS decoder can achieve the same FER as N-NMS with significantly fewer parameters. The N-2D-NMS decoder for a (16200,7200) DVB-S2 standard LDPC code shows a lower error floor than belief propagation. Finally, a hybrid decoding structure combining a feedforward structure with a recurrent structure is proposed; it shows decoding performance similar to the full feedforward structure while requiring significantly fewer parameters.
  4. In this paper, we propose approaches that combine two types of iterative decoding algorithms commonly used for decoding low-density parity-check (LDPC) codes. One strategy is based on a low-complexity bit-flipping algorithm, and the proposed modification enables a significant performance improvement with no significant increase in average computational complexity (see the cascade sketch after this list). The other strategy is based on a belief propagation decoder, and the resulting decoder has improved error-correction capability for codes with short codeword lengths.
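For the degree-based weight sharing described in items 1 and 3, below is a minimal sketch of a normalized min-sum check-node update whose single multiplicative weight is looked up by check-node degree. The function name, the `beta_by_degree` table, and the LLR-sign convention are illustrative assumptions, not the papers' notation.

```python
import numpy as np

def nms_check_update(llrs_in, beta_by_degree):
    """One check node's normalized min-sum update with a weight shared
    by check-node degree (the N-2D-NMS weight-sharing idea; illustrative).

    llrs_in        : 1-D array of incoming variable-to-check LLRs.
    beta_by_degree : mapping from check-node degree to a trained weight.
    Returns the outgoing check-to-variable messages.
    """
    beta = beta_by_degree[len(llrs_in)]   # one weight per degree, not per edge
    signs = np.sign(llrs_in)
    mags = np.abs(llrs_in)
    out = np.empty_like(llrs_in, dtype=float)
    for i in range(len(llrs_in)):
        # Extrinsic rule: exclude position i from both the sign product
        # and the minimum magnitude.
        out[i] = beta * np.prod(np.delete(signs, i)) * np.delete(mags, i).min()
    return out
```

Sharing one weight per degree (and optionally per iteration) replaces the per-edge, per-iteration weights of N-NMS, which is the source of the parameter reduction the abstracts report.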
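For the VN doping of item 2, the decoder-side effect is confined to initialization: doped positions carry bits known a priori, so their channel LLRs can be pinned to a saturated value of the correct sign while the decoding process itself is untouched (the advantage over CN doping noted in the abstract). A minimal sketch under that reading follows; the function name and the saturation constant are assumptions.

```python
import numpy as np

def init_llrs_with_vn_doping(channel_llrs, doped_positions, doped_bits,
                             saturation=1e9):
    """Pin the LLRs of doped variable nodes to a saturated value.

    Convention assumed here: positive LLR favors bit 0. Doped positions
    hold bit values known at both encoder and decoder, so the decoder
    treats them as perfectly reliable.
    """
    llrs = np.asarray(channel_llrs, dtype=float).copy()
    llrs[doped_positions] = np.where(np.asarray(doped_bits) == 0,
                                     saturation, -saturation)
    return llrs
```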
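Item 4 does not spell out how the bit-flipping and belief propagation decoders are combined; one natural low-average-complexity combination is a cascade that runs the cheap decoder first and falls back to BP only on failure. The sketch below shows that cascade under this assumption, with `bf_decoder` and `bp_decoder` as hypothetical stand-ins for the two component decoders.

```python
def cascade_decode(H, y, bf_decoder, bp_decoder):
    """Try a low-complexity bit-flipping decoder first; fall back to
    belief propagation only if the BF output is not a codeword.
    (Illustrative combination; the paper's exact strategy may differ.)
    H and y are NumPy arrays: parity-check matrix and received word."""
    x = bf_decoder(H, y)
    if not (H @ x % 2).any():    # zero syndrome: BF already succeeded
        return x
    return bp_decoder(H, y)      # stronger but costlier fallback
```

Because most received words are decoded by the cheap first stage, the average complexity stays close to that of bit flipping alone, which matches the abstract's claim of no significant increase in average computational complexity.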