In this paper, a method for joint source-channel coding (JSCC) based on concatenated spatially coupled low-density parity-check (SC-LDPC) codes is investigated. A construction consisting of two SC-LDPC codes is proposed: one for source coding and the other for channel coding, with a joint belief propagation-based decoder. In addition, a novel windowed decoding (WD) scheme is presented that significantly reduces latency and complexity requirements. The asymptotic behavior for various graph node degrees is analyzed using a protograph-based extrinsic information transfer (EXIT) chart analysis, both for LDPC block codes with block decoding and for SC-LDPC codes with the WD scheme, showing robust performance for concatenated SC-LDPC codes. Simulation results show a notable performance improvement compared to existing state-of-the-art JSCC schemes based on LDPC codes with comparable latency and complexity constraints.
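As a rough, hypothetical illustration of the WD idea (not the paper's exact schedule), the decoder slides a window of W consecutive coupled positions along the SC-LDPC chain, running belief propagation only inside the window and committing the oldest position before shifting; the function and parameter names below are illustrative:

```python
def window_schedule(L, W):
    """Sketch of a windowed-decoding schedule over an SC-LDPC chain of
    L coupled positions with window size W. Returns, for each step, the
    positions active in the window and the position decided at that step."""
    schedule = []
    for t in range(L):
        active = list(range(t, min(t + W, L)))  # positions inside the window
        schedule.append((active, t))            # decide position t, then slide
    return schedule
```

Because only W positions (rather than the full chain) are active at a time, both latency and per-step complexity scale with W instead of L, which is the source of the reduction claimed above.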
On Sparse Regression LDPC Codes
Iterative decoding of graph-based codes and sparse recovery through approximate message passing (AMP) are two research areas that have seen monumental progress in recent decades. Inspired by these advances, this article introduces sparse regression LDPC codes (SR-LDPC codes) and their decoding. Sparse regression codes (SPARCs) are a class of error-correcting codes that build on ideas from compressed sensing and can be decoded using AMP. In certain settings, SPARCs are known to achieve capacity; yet, their performance suffers at finite block lengths. Likewise, low-density parity-check (LDPC) codes can be decoded efficiently using belief propagation and can also be capacity-achieving. This article introduces a novel concatenated coding structure that combines an LDPC outer code with a SPARC-inspired inner code. Efficient decoding for such a code can be achieved using AMP with a denoiser that performs belief propagation on the factor graph of the outer LDPC code. The proposed framework exhibits performance improvements over SPARCs and standard LDPC codes for finite block lengths and results in a steep waterfall in error performance, a phenomenon not observed in uncoded SPARCs.
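The decoding pipeline described above can be sketched as a generic AMP recursion. In the SR-LDPC decoder the denoiser would run belief propagation on the outer LDPC factor graph; here a simple scalar soft-threshold denoiser stands in for it, and all names and parameters are illustrative:

```python
import numpy as np

def amp(A, y, denoise, d_denoise, iters=30):
    """Generic AMP recursion for y = A x + noise. The SR-LDPC decoder
    would replace `denoise` with BP on the outer LDPC factor graph."""
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(iters):
        r = x + A.T @ z                               # effective observation
        onsager = z * (N / n) * np.mean(d_denoise(r)) # Onsager correction term
        x = denoise(r)                                # denoising step
        z = y - A @ x + onsager                       # corrected residual
    return x

# Illustrative scalar denoiser (soft threshold) and its derivative:
soft = lambda r, t=0.5: np.sign(r) * np.maximum(np.abs(r) - t, 0.0)
d_soft = lambda r, t=0.5: (np.abs(r) > t).astype(float)
```

The Onsager term is what distinguishes AMP from plain iterative thresholding: it decorrelates the effective noise across iterations, which is what makes the state-evolution analysis of such decoders tractable.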
- Award ID(s):
- 2131106
- PAR ID:
- 10545208
- Publisher / Repository:
- IEEE
- Date Published:
- ISBN:
- 978-1-6654-7554-9
- Page Range / eLocation ID:
- 2350 to 2355
- Subject(s) / Keyword(s):
- Error Correction Codes; LDPC Codes; Sparse Regression Codes; Iterative Decoding; Message Passing; Approximate Message Passing; Belief Propagation
- Format(s):
- Medium: X
- Location:
- Taipei, Taiwan
- Sponsoring Org:
- National Science Foundation
More Like this
Neural Normalized MinSum (N-NMS) decoding delivers better frame error rate (FER) performance on linear block codes than conventional normalized MinSum (NMS) by assigning dynamic multiplicative weights to each check-to-variable message in each iteration. Previous N-NMS efforts have primarily investigated short block codes (N < 1000), because the number of N-NMS parameters to be trained is proportional to the number of edges in the parity-check matrix and the number of iterations, which imposes an impractical memory requirement when PyTorch or TensorFlow is used for training. This paper provides efficient approaches to training the parameters of N-NMS that support longer block lengths. Specifically, this paper introduces a family of neural 2-dimensional normalized (N-2D-NMS) decoders with various reduced parameter sets and shows how performance varies with the parameter set selected. The N-2D-NMS decoders share weights with respect to check node and/or variable node degree. Simulation results justify this approach, showing that the trained weights of N-NMS have a strong correlation to the check node degree, variable node degree, and iteration number. Further simulation results on the (3096,1032) protograph-based raptor-like (PBRL) code show that the N-2D-NMS decoder can achieve the same FER as N-NMS with significantly fewer parameters. The N-2D-NMS decoder for a (16200,7200) DVB-S2 standard LDPC code shows a lower error floor than belief propagation. Finally, a hybrid decoding structure combining a feedforward structure with a recurrent structure is proposed. The hybrid structure shows decoding performance similar to the full feedforward structure, but requires significantly fewer parameters.
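A minimal sketch of the weighted min-sum check-node update that NMS-style decoders are built on; in the N-2D-NMS scheme the single `weight` below would be a trained parameter shared per check-node degree (and/or variable-node degree) and iteration, rather than one weight per edge. The function and its arguments are illustrative:

```python
import numpy as np

def nms_check_update(msgs, weight):
    """Normalized min-sum check-to-variable update for one check node.
    msgs: incoming variable-to-check messages (one per connected variable).
    weight: multiplicative normalization factor; trained and shared per
    (check-node degree, iteration) in an N-2D-NMS-style decoder."""
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)           # extrinsic: exclude edge i
        sign = np.prod(np.sign(others))       # product of signs
        out[i] = weight * sign * np.min(np.abs(others))  # scaled min magnitude
    return out
```

Sharing one weight per (degree, iteration) pair instead of per edge is exactly what shrinks the parameter count from edges × iterations down to a handful of values, enabling the longer block lengths discussed above.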
This paper proposes a finite-precision decoding method for low-density parity-check (LDPC) codes that features the three steps of Reconstruction, Computation, and Quantization (RCQ). Unlike Mutual-Information-Maximization Quantized Belief Propagation (MIM-QBP), RCQ can approximate either belief propagation or Min-Sum decoding. MIM-QBP decoders do not work well when the fraction of degree-2 variable nodes is large. However, a large fraction of degree-2 variable nodes is sometimes used to facilitate a fast encoding structure, as seen in the IEEE 802.11 standard and the DVB-S2 standard. In contrast to MIM-QBP, the proposed RCQ decoder may be applied to any off-the-shelf LDPC code, including those with a large fraction of degree-2 variable nodes. Simulations show that a 4-bit Min-Sum RCQ decoder delivers frame error rate (FER) performance within 0.1 dB of floating-point belief propagation (BP) for the IEEE 802.11 standard LDPC code in the low-SNR region. The RCQ decoder actually outperforms floating-point BP and Min-Sum in the high-SNR region, where the FER is less than 10⁻⁵. This paper also introduces Hierarchical Dynamic Quantization (HDQ) to design the time-varying non-uniform quantizers required by RCQ decoders. HDQ is a low-complexity design technique that is only slightly sub-optimal. Simulation results comparing HDQ and optimal quantization on the symmetric binary-input memoryless additive white Gaussian noise channel show a mutual information loss of less than 10⁻⁶ bits, which is negligible in practice.
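The three RCQ steps can be sketched for a single check node as follows; the reconstruction table and quantizer thresholds are illustrative placeholders for what an HDQ-style design would actually produce per iteration:

```python
import numpy as np

def rcq_check_update(q_in, recon_table, thresholds):
    """One RCQ-style check-node update on few-bit message indices:
    Reconstruct the quantized inputs to real values, Compute a min-sum
    output, then Quantize back to an index. Tables are illustrative."""
    vals = recon_table[q_in]                 # Reconstruction: index -> value
    sign = np.prod(np.sign(vals))
    mag = np.min(np.abs(vals))               # Computation: min-sum rule
    out = sign * mag
    return np.searchsorted(thresholds, out)  # Quantization: value -> index
```

The point of the structure is that messages stay as small integer indices on the wires, while the internal computation runs on reconstructed values; only the per-iteration tables change, which is what makes the quantizers time-varying.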
Error correction coding schemes with local-global decoding are motivated by practical data storage applications where a balance must be achieved between low latency read access and high data reliability. As an example, consider a 4KB codeword, consisting of four 1KB subblocks, that supports a local-global decoding architecture. Local decoding can provide reliable, low-latency access to individual 1KB subblocks under good channel conditions, while global decoding can provide a “safety-net” for recovery of the entire 4KB block when local decoding fails under bad channel conditions. Recently, Ram and Cassuto have proposed such local-global decoding architectures for LDPC codes and spatially coupled LDPC codes. In this paper, we investigate a coupled polar code architecture that supports both local and global decoding. The coupling scheme incorporates a systematic outer polar code and a partitioned mapping of the outer codeword to semipolarized bit-channels of the inner polar codes. Error rate simulation results are presented for 2 and 4 subblocks.
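The local-global control flow described above can be sketched as follows, with the actual polar (or LDPC) decoders abstracted behind placeholder callables; only the fallback logic is shown:

```python
def local_global_decode(subblocks, local_decode, global_decode):
    """Sketch of local-global decoding: attempt low-latency local decoding
    of each subblock; invoke the global 'safety-net' decoder over the whole
    block only if any local decode fails. Decoders are placeholders that
    return (success_flag, data) and a list of decoded subblocks, respectively."""
    results = [local_decode(sb) for sb in subblocks]
    if all(ok for ok, _ in results):
        return [data for _, data in results]      # fast path: all locals succeed
    return global_decode(subblocks)               # safety net: joint decoding
```

Under good channel conditions the fast path dominates, so average read latency stays close to that of a single subblock decode, while reliability under bad conditions is set by the global code.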
This article presents a novel system, LLDPC, which brings Low-Density Parity-Check (LDPC) codes into Long Range (LoRa) networks to improve Forward Error Correction, a task currently managed by less efficient Hamming codes. Three challenges in achieving this are addressed: First, the Chirp Spread Spectrum (CSS) modulation used by LoRa produces only hard demodulation outcomes, whereas LDPC decoding requires log-likelihood ratios (LLRs) for each bit. We solve this by developing a CSS-specific LLR extractor. Second, we improve LDPC decoding efficiency by using symbol-level information to fine-tune the LLRs of error-prone bits. Finally, to minimize the decoding latency caused by the computationally heavy Soft Belief Propagation (SBP) algorithm typically used in LDPC decoding, we apply graph neural networks to accelerate the process. Our results show that LLDPC extends default LoRa's lifetime by 86.7% and reduces SBP decoding latency by a factor of 58.09.
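A toy sketch of bit-LLR extraction from per-symbol scores conveys the general idea: for each bit position, compare the best-scoring candidate symbol in which that bit is 0 against the best in which it is 1. LLDPC's actual CSS-specific extractor differs, and this function and its inputs are purely illustrative:

```python
def bit_llrs(symbol_scores, bits_per_symbol):
    """Illustrative bit-LLR extraction from per-symbol scores (e.g.
    correlation magnitudes). symbol_scores maps symbol index -> score.
    For each bit position, compares the best score among symbols where
    the bit is 0 against the best where the bit is 1."""
    llrs = []
    for b in range(bits_per_symbol):
        s0 = max(s for sym, s in symbol_scores.items() if not (sym >> b) & 1)
        s1 = max(s for sym, s in symbol_scores.items() if (sym >> b) & 1)
        llrs.append(s0 - s1)  # positive -> bit more likely 0
    return llrs
```

This max-log-style approximation is a common way to turn per-symbol soft information into per-bit LLRs when the demodulator does not expose them directly.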