
Title: LDPC decoders with re-initializations based on synergy of hard decision and message passing principles
In this paper we propose approaches that combine two types of iterative decoding algorithms commonly used for decoding low-density parity-check (LDPC) codes. One strategy is based on a low-complexity bit-flipping algorithm, and the proposed modification enables a significant performance improvement with no significant increase in average computational complexity. The other strategy is based on a belief propagation decoder, and the resulting decoder has improved error correction capabilities for codes with short codeword lengths.
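The hard-decision component referenced above is typically a Gallager-style bit-flipping decoder. A minimal sketch of that baseline (not the paper's modified algorithm; the parity-check matrix, flipping rule, and all-zero test word are illustrative assumptions):

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=20):
    """Gallager-style hard-decision bit-flipping decoder (generic sketch).

    H: (m, n) binary parity-check matrix; y: hard-decision received word.
    Each iteration flips the bit(s) participating in the most unsatisfied
    checks, stopping as soon as all checks are satisfied.
    """
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True                    # valid codeword found
        counts = syndrome @ H                 # unsatisfied checks per bit
        x[counts == counts.max()] ^= 1        # flip the worst offenders
    return x, False

# (7,4) Hamming parity-check matrix, used purely as a small illustration
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codeword = np.zeros(7, dtype=int)             # all-zero word is always valid
received = codeword.copy()
received[2] ^= 1                              # inject a single bit error
decoded, ok = bit_flip_decode(H, received)
```

The paper's contribution lies in how such a low-complexity stage is combined with re-initialized message passing; this sketch shows only the generic baseline.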
Award ID(s):
2027844 2052751 2106189 1855879
PAR ID:
10342843
Author(s) / Creator(s):
Date Published:
Journal Name:
Conference on Electronics, Telecommunication, Computing, Automation and Nuclear Engineering
Page Range / eLocation ID:
758 - 763
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The new 5G communications standard increases data rates and supports low-latency communication that places constraints on the computational complexity of channel decoders. 5G low-density parity-check (LDPC) codes have the so-called protograph-based raptor-like (PBRL) structure which offers inherent rate-compatibility and excellent performance. Practical LDPC decoder implementations use message-passing decoding with finite precision, which becomes coarse as complexity is more severely constrained. Performance degrades as the precision becomes more coarse. Recently, the information bottleneck (IB) method was used to design mutual-information-maximizing lookup tables that replace conventional finite-precision node computations. The IB approach exchanges messages represented by integers with very small bit width. This paper extends the IB principle to the flexible class of PBRL LDPC codes as standardized in 5G. The extensions include puncturing and rate-compatible IB decoder design. As an example of the new approach, a 4-bit information bottleneck decoder is evaluated for PBRL LDPC codes over a typical range of rates. Frame error rate simulations show that the proposed scheme outperforms offset min-sum decoding algorithms and operates very close to double-precision sum-product belief propagation decoding. 
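For context, the offset min-sum baseline that the information bottleneck decoder is compared against computes check-node messages as below (a generic sketch; the offset value and message values are illustrative, not from the paper):

```python
import numpy as np

def check_node_offset_min_sum(msgs, beta=0.5):
    """Offset min-sum check-node update (generic baseline sketch).

    msgs: incoming variable-to-check LLRs on a check node's edges.
    Each outgoing message uses the sign product and offset-corrected
    minimum magnitude of all *other* incoming messages.
    """
    msgs = np.asarray(msgs, dtype=float)
    out = np.empty_like(msgs)
    for i in range(len(msgs)):
        others = np.delete(msgs, i)
        sign = np.prod(np.sign(others))
        mag = max(np.min(np.abs(others)) - beta, 0.0)   # offset correction
        out[i] = sign * mag
    return out

out = check_node_offset_min_sum([2.0, -1.0, 3.0], beta=0.5)
```

An information bottleneck decoder replaces this floating-point computation with a small-bit-width lookup table; the point of the comparison above is that 4-bit tables recover most of the gap to double-precision sum-product decoding.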
  2. In this paper, a method for joint source-channel coding (JSCC) based on concatenated spatially coupled low-density parity-check (SC-LDPC) codes is investigated. A construction consisting of two SC-LDPC codes is proposed: one for source coding and the other for channel coding, with a joint belief propagation-based decoder. Also, a novel windowed decoding (WD) scheme is presented with significantly reduced latency and complexity requirements. The asymptotic behavior for various graph node degrees is analyzed using a protograph-based Extrinsic Information Transfer (EXIT) chart analysis for both LDPC block codes with block decoding and for SC-LDPC codes with the WD scheme, showing robust performance for concatenated SC-LDPC codes. Simulation results show a notable performance improvement compared to existing state-of-the-art JSCC schemes based on LDPC codes with comparable latency and complexity constraints. 
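The windowed decoding idea can be sketched as a sliding schedule that activates only W consecutive coupled positions at a time and commits the oldest position as the window advances (a schematic sketch under simplifying assumptions; `decode_window` stands in for whatever message-passing routine is run on the active window, here an identity placeholder):

```python
def windowed_decode(sections, W, decode_window):
    """Schematic sliding-window schedule for SC-LDPC decoding.

    sections: per-position channel LLR blocks. Only W consecutive
    positions are active at a time; the oldest active position is
    hard-decided ("committed") when the window slides past it, which
    is what bounds latency and complexity relative to full-block BP.
    """
    decided = []
    for start in range(len(sections) - W + 1):
        window = decode_window(sections[start:start + W])  # iterate BP here
        sections[start:start + W] = window
        decided.append([1 if llr < 0 else 0 for llr in window[0]])
    # commit the remaining positions once the window reaches the end
    for block in sections[len(sections) - W + 1:]:
        decided.append([1 if llr < 0 else 0 for llr in block])
    return decided

# identity stand-in for the per-window decoder, toy LLRs
bits = windowed_decode([[1.0, -2.0], [-0.5, 3.0], [2.0, 2.0]], 2, lambda w: w)
```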
  3. In this paper, we propose a novel message-passing decoding approach that leverages the degeneracy of quantum low-density parity-check codes to enhance decoding performance, eliminating the need for serial scheduling or post-processing. Our focus is on two-block Calderbank-Shor-Steane (CSS) codes, which are composed of symmetric stabilizers that hinder the performance of conventional iterative decoders with uniform update rules. Specifically, our analysis shows that, under the isolation assumption, the min-sum decoder fails to converge when constant-weight errors are applied to symmetric stabilizers, as variable-to-check messages oscillate in every iteration. To address this, we introduce a decoding technique that exploits this oscillatory property by applying distinct update rules: variable nodes in one block utilize messages from previous iterations, while those in the other block are updated conventionally. Logical error-rate results demonstrate that the proposed decoder significantly outperforms the normalized min-sum decoder and achieves competitive performance with belief propagation enhanced by order-zero ordered statistics decoding, all while maintaining linear complexity in the code’s block length. 
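The block-dependent update rule can be illustrated with a toy damping function (a simplified rendering of the idea, not the paper's exact decoder; the oscillating message below is a synthetic stand-in for the min-sum behavior described above):

```python
def variable_update(prev_out, new_out, in_memory_block):
    """Block-dependent variable-node rule (toy sketch, not the paper's decoder).

    Nodes in the designated block average the previous iteration's outgoing
    message with the new one, which damps the sign oscillation on symmetric
    stabilizers; nodes in the other block update conventionally.
    """
    return 0.5 * (prev_out + new_out) if in_memory_block else new_out

# A message whose conventional update flips its sign every iteration
# oscillates forever; the memory rule drives it to zero instead.
msg_conv = msg_mem = 1.0
for _ in range(5):
    msg_conv = -msg_conv                                 # oscillates
    msg_mem = variable_update(msg_mem, -msg_mem, True)   # damps to 0
```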
  4. Recent experimental advances have made it possible to implement logical multiqubit transversal gates on surface codes in a multitude of platforms. A transversal controlled gate on two surface codes introduces correlated errors across the code blocks and thus requires modified decoding compared to established methods of decoding surface-code quantum memory (SCQM) or lattice-surgery operations. In this work, we examine and benchmark the performance of three different decoding strategies for this transversal gate for scalable fault-tolerant quantum computation. In particular, we present a low-complexity decoder based on minimum-weight perfect matching (MWPM) that achieves the same threshold as the SCQM MWPM decoder. We extend our analysis with a study of tailored decoding of a transversal-teleportation circuit, along with a comparison between the performance of lattice-surgery and transversal operations under Pauli- and erasure-noise models. Our investigation builds toward systematic estimation of the cost of implementing large-scale quantum algorithms based on transversal gates in the surface code. Published by the American Physical Society, 2025
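The matching step at the core of an MWPM decoder pairs up syndrome defects so that the total path weight is minimized. A brute-force toy version of that step (defect coordinates are made up; real decoders use the Blossom algorithm rather than enumeration):

```python
from itertools import permutations

# Toy defect set on a lattice; coordinates are illustrative assumptions.
defects = {"d0": (0, 0), "d1": (0, 3), "d2": (2, 0), "d3": (2, 3)}

def dist(a, b):
    """Manhattan distance between two defects on the lattice."""
    (x1, y1), (x2, y2) = defects[a], defects[b]
    return abs(x1 - x2) + abs(y1 - y2)

def mwpm(nodes):
    """Minimum-weight perfect matching by exhaustive enumeration."""
    best, best_w = None, float("inf")
    for order in permutations(nodes):
        pairs = [tuple(sorted(order[i:i + 2])) for i in range(0, len(order), 2)]
        w = sum(dist(a, b) for a, b in pairs)
        if w < best_w:
            best, best_w = sorted(pairs), w
    return best, best_w

pairing, weight = mwpm(list(defects))
```

Decoding a transversal gate complicates this picture because errors are correlated *across* the two code blocks, which is why the matching graph (and hence the decoder) must be modified relative to the memory case.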
  5. Maximum-likelihood (ML) decoding of tail-biting convolutional codes (TBCCs) with S = 2^v states traditionally requires a separate S-state trellis for each of the S possible starting/ending states, resulting in complexity proportional to S^2. Lower-complexity ML decoders for TBCCs have complexity proportional to S log S. This high complexity motivates the use of the wrap-around Viterbi algorithm, which sacrifices ML performance for complexity proportional to S. This paper presents an ML decoder for TBCCs that uses list decoding to achieve an average complexity proportional to S at operational signal-to-noise ratios where the expected list size is close to one. The new decoder uses parallel list Viterbi decoding with a progressively growing list size operating on a single S-state trellis. Decoding does not terminate until the most likely tail-biting codeword has been identified. This approach is extended to ML decoding of tail-biting convolutional codes concatenated with a cyclic redundancy check code, as explored recently by Yang et al. and King et al. Constraining the maximum list size further reduces complexity but sacrifices guaranteed ML performance, increasing errors and introducing erasures.
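For very short blocks, the ML baseline against which these trellis decoders are measured can be checked by exhaustive search. A toy sketch with an assumed rate-1/2, memory-1 convolutional code (generators (1, 1+D); not one of the codes studied in the paper):

```python
import itertools

def encode_tailbiting(bits):
    """Tail-biting encoding for a rate-1/2, memory-1 code with
    generators (1, 1+D): the encoder state wraps around, so the
    starting state equals the ending state (the last input bit)."""
    out = []
    for i in range(len(bits)):
        prev = bits[i - 1]            # i = 0 wraps to the last bit
        out += [bits[i], bits[i] ^ prev]
    return out

def ml_tbcc_decode(received, n):
    """ML decoding by exhaustive search over all 2^n inputs: a stand-in
    for the S-trellises-of-S-states approach, viable only for tiny n."""
    best, best_dist = None, None
    for cand in itertools.product([0, 1], repeat=n):
        cw = encode_tailbiting(list(cand))
        dist = sum(a != b for a, b in zip(cw, received))
        if best_dist is None or dist < best_dist:
            best, best_dist = list(cand), dist
    return best

cw = encode_tailbiting([1, 0, 1, 1])
rx = cw[:]
rx[0] ^= 1                            # single hard-decision error
recovered = ml_tbcc_decode(rx, 4)
```

The decoders in the paper achieve this same ML answer with complexity that grows with the state count S rather than exponentially in the block length, by growing the Viterbi list only as far as needed to certify the most likely tail-biting codeword.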