Title: Trapping Sets of Quantum LDPC Codes
Iterative decoders for finite-length quantum low-density parity-check (QLDPC) codes are attractive because their hardware complexity scales only linearly with the number of physical qubits. However, their performance is impaired by short cycles, by detrimental graphical configurations known as trapping sets (TSs) present in the code's Tanner graph, and by the symmetric degeneracy of errors. These factors significantly degrade decoding performance and cause the so-called error floor. In this paper, we establish a systematic methodology by which one can identify and classify quantum trapping sets (QTSs) according to their topological structure and the decoder used. The conventional definition of a TS from classical error correction is generalized to address the syndrome decoding scenario for QLDPC codes. We show that knowledge of QTSs can be used to design better QLDPC codes and decoders. Frame error rate improvements of two orders of magnitude in the error floor regime are demonstrated for some practical finite-length QLDPC codes, without requiring any post-processing.
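As background for the classical notion that the paper generalizes, the short sketch below computes the (a, b) parameters of a candidate trapping set: a set of a variable nodes whose induced subgraph contains b odd-degree check nodes. This is a minimal illustration in Python with names of our own choosing; the paper's quantum trapping sets further account for syndrome decoding and degeneracy, which this sketch does not capture.

```python
import numpy as np

def trapping_set_parameters(H, var_subset):
    """Classical (a, b) trapping-set parameters for a set of variable nodes.

    H          : binary parity-check matrix (numpy array, shape m x n)
    var_subset : iterable of column indices (variable nodes) in the candidate set
    Returns (a, b): a = size of the variable-node set,
                    b = number of check nodes connected to the set an odd
                        number of times (odd-degree checks in the induced subgraph).
    """
    cols = sorted(set(var_subset))
    a = len(cols)
    # degree of each check node restricted to the induced subgraph
    check_degrees = H[:, cols].sum(axis=1)
    b = int(np.count_nonzero(check_degrees % 2 == 1))
    return a, b

# Example: a Hamming(7,4) parity-check matrix and a 3-variable candidate set.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
print(trapping_set_parameters(H, [0, 1, 2]))  # (3, 1)
```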
Award ID(s): 1855879, 2027844, 1813401, 2106189
PAR ID: 10318715
Author(s) / Creator(s):
Date Published:
Journal Name: Quantum
Volume: 5
ISSN: 2521-327X
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
1. Quantum low-density parity-check codes are a promising approach to fault-tolerant quantum computation, offering potential advantages in rate and decoding efficiency. Quantum Margulis codes are a new class of QLDPC codes derived from Margulis' classical LDPC construction via the two-block group algebra (2BGA) framework. We show that, under depolarizing noise, quantum Margulis codes can be decoded efficiently with a standard min-sum decoder of linear complexity, unlike bivariate bicycle (BB) codes, which require ordered statistics decoding for effective error correction. This is attributed to their Tanner graph structure, which does not exhibit group symmetry, thereby mitigating the well-known problem of error degeneracy in QLDPC decoding. To further enhance performance, we propose an algorithm for constructing 2BGA codes with controlled girth, ensuring a minimum girth of 6 or 8, and use it to generate several quantum Margulis codes of length 240 and 642 (a girth-check sketch follows this list). We validate our approach through numerical simulations, demonstrating that quantum Margulis codes perform significantly better than BB codes in the error floor region under min-sum decoding.
2. Quantum low-density parity-check (QLDPC) codes have emerged as a promising technique for quantum error correction. A variety of decoders have been proposed for QLDPC codes, and many utilize belief propagation (BP) decoding in some fashion. However, BP decoding of degenerate QLDPC codes is known to have convergence issues, typically attributed to short cycles in the Tanner graph and to error patterns with the same syndrome caused by code degeneracy. In this work, we propose a decoder for QLDPC codes based on BP guided decimation (BPGD), which has previously been studied for constraint satisfaction and lossy compression problems. The decimation process is applicable to both binary and quaternary BP and involves sequentially freezing the values of the most reliable qubits to encourage BP convergence (a decimation sketch follows this list). We find that BPGD significantly reduces the BP failure rate due to non-convergence, achieving performance on par with BP with ordered statistics decoding and BP with stabilizer inactivation, without the need to solve systems of linear equations. To explore how and why BPGD improves performance, we discuss several interpretations of BPGD and their connection to BP syndrome decoding.
3. Quantum error correction has recently been shown to benefit greatly from specific physical encodings of the code qubits. In particular, several researchers have considered encoding individual code qubits with the continuous-variable Gottesman-Kitaev-Preskill (GKP) code and then imposing an outer discrete-variable code, such as the surface code, on these GKP qubits. Under such a concatenation scheme, the analog information from the inner GKP error correction improves the noise threshold of the outer code. However, the surface code has vanishing rate and demands substantial resources as the distance grows. In this work, we concatenate the GKP code with generic quantum low-density parity-check (QLDPC) codes and demonstrate a natural way to exploit the GKP analog information in iterative decoding algorithms (an illustrative LLR sketch follows this list). We first show the noise thresholds for two lifted product QLDPC code families, and then show the improvements of the noise thresholds when the iterative decoder, a hardware-friendly min-sum algorithm (MSA), utilizes the GKP analog information. We also show that, when the GKP analog information is combined with a sequential update schedule for the MSA, the scheme surpasses the well-known CSS Hamming bound for these code families. Furthermore, we observe that the GKP analog information helps the iterative decoder escape harmful trapping sets in the Tanner graph of the QLDPC code, thereby eliminating or significantly lowering the error floor of the logical error rate curves. Finally, we discuss new fundamental and practical questions that arise from this work regarding channel capacity under GKP analog information and improved decoder design and analysis.
4. For decoding low-density parity-check (LDPC) codes, the attenuated min-sum algorithm (AMSA) and the offset min-sum algorithm (OMSA) can outperform the conventional min-sum algorithm (MSA) at low signal-to-noise ratios (SNRs), i.e., in the "waterfall region" of the bit error rate curve. This paper demonstrates that, for quantized decoders, the MSA actually outperforms the AMSA and OMSA in the "error floor" region, and that all three algorithms suffer from a relatively high error floor. This motivates the introduction of a modified MSA designed to outperform the MSA, AMSA, and OMSA across all SNRs (the check-node updates that distinguish these variants are sketched after this list). The new algorithm is based on the assumption that trapping sets are the major cause of the error floor for quantized LDPC decoders. A performance estimation tool based on trapping sets is used to verify the effectiveness of the new algorithm and to guide parameter selection. We also show that the implementation complexity of the new algorithm is only slightly higher than that of the AMSA or OMSA. Finally, the simulated performance of the new algorithm, using several classes of LDPC codes (including spatially coupled LDPC codes), is shown to be superior to that of the MSA, AMSA, and OMSA across all SNRs.
5. It is well known that, for decoding low-density parity-check (LDPC) codes, the attenuated min-sum algorithm (AMSA) and the offset min-sum algorithm (OMSA) can outperform the conventional min-sum algorithm (MSA) at low signal-to-noise ratios (SNRs). In this paper, we demonstrate that, for quantized LDPC decoders, although the MSA achieves better high-SNR performance than the AMSA and OMSA, all three algorithms suffer from a relatively high error floor. We therefore propose a novel modification of the MSA for quantized LDPC decoding, with the aim of lowering the error floor. Compared to the quantized MSA, the proposed modification is also helpful at low SNRs, where it matches the waterfall performance of the quantized AMSA and OMSA. The new algorithm is designed based on the assumption that trapping/absorbing sets (or other problematic graphical objects) are the major cause of the error floor for quantized LDPC decoders, and it aims to reduce the probability that these problematic objects lead to decoding errors.
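For the first related record above (quantum Margulis / 2BGA codes with controlled girth), the sketch below computes the girth of a Tanner graph directly from a parity-check matrix, the quantity that the construction constrains to be at least 6 or 8. It is an illustrative check in Python with our own function names, not the paper's construction algorithm.

```python
import numpy as np
from collections import deque

def tanner_girth(H):
    """Exact girth (shortest cycle length) of the Tanner graph of H.

    Variable nodes are 0..n-1, check nodes are n..n+m-1.  Returns inf if the
    graph has no cycle.  A single BFS may overestimate the shortest cycle
    through its start node, but minimizing over all start nodes is exact.
    Tanner graphs are bipartite, so the girth is always even.
    """
    m, n = H.shape
    adj = [[] for _ in range(n + m)]
    for i, j in zip(*np.nonzero(H)):
        adj[j].append(n + i)          # variable j -- check i
        adj[n + i].append(j)
    best = float("inf")
    for s in range(n + m):
        dist, parent = {s: 0}, {s: -1}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    queue.append(v)
                elif v != parent[u]:
                    # non-tree edge closes a cycle of at most this length
                    best = min(best, dist[u] + dist[v] + 1)
        if best == 4:                 # a bipartite graph cannot do better
            return 4
    return best

# Two columns sharing two checks create a 4-cycle, which a controlled-girth
# construction would reject.
H4 = np.array([[1, 1, 0], [1, 1, 1]])
print(tanner_girth(H4))  # 4
```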
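For the second related record (BP guided decimation), the following sketch illustrates the decimation loop wrapped around a plain binary normalized min-sum syndrome decoder. It is a simplified stand-in under our own assumptions: the paper considers both binary and quaternary BP, and its freezing criterion and message schedule may differ from the ones used here.

```python
import numpy as np

def min_sum_syndrome_decode(H, syndrome, prior_llr, frozen=None,
                            max_iter=30, alpha=0.75):
    """Binary normalized min-sum syndrome decoding on the Tanner graph of H.

    H         : binary parity-check matrix (numpy array, m x n)
    syndrome  : length-m binary syndrome to match
    prior_llr : length-n channel LLRs
    frozen    : dict {variable index: fixed bit value} of decimated qubits
    Assumes every check involves at least two variables.
    """
    m, n = H.shape
    frozen = frozen or {}
    BIG = 1e9                                   # effectively infinite LLR
    llr = np.asarray(prior_llr, dtype=float).copy()
    for j, bit in frozen.items():
        llr[j] = BIG if bit == 0 else -BIG      # pin decimated bits
    checks = [np.flatnonzero(H[i]) for i in range(m)]
    msg_vc = {(i, j): llr[j] for i in range(m) for j in checks[i]}
    msg_cv = {(i, j): 0.0 for i in range(m) for j in checks[i]}
    hard, total = (llr < 0).astype(int), llr
    for _ in range(max_iter):
        for i in range(m):                      # check-node update
            vals = [msg_vc[(i, j)] for j in checks[i]]
            for k, j in enumerate(checks[i]):
                others = vals[:k] + vals[k + 1:]
                neg = sum(v < 0 for v in others)
                sign = -1.0 if (syndrome[i] + neg) % 2 else 1.0
                msg_cv[(i, j)] = alpha * sign * min(abs(v) for v in others)
        total = llr.copy()                      # variable-node update
        for (i, j), v in msg_cv.items():
            total[j] += v
        for i in range(m):
            for j in checks[i]:
                msg_vc[(i, j)] = total[j] - msg_cv[(i, j)]
        hard = (total < 0).astype(int)
        if np.array_equal((H @ hard) % 2, np.asarray(syndrome) % 2):
            return hard, total, True            # syndrome matched
    return hard, total, False

def bpgd_decode(H, syndrome, prior_llr):
    """BP guided decimation (sketch): after each failed BP attempt, freeze the
    most reliable still-free qubit to its current hard decision and re-run."""
    n = H.shape[1]
    frozen, hard = {}, np.zeros(n, dtype=int)
    for _ in range(n):
        hard, total, ok = min_sum_syndrome_decode(H, syndrome, prior_llr, frozen)
        if ok:
            return hard
        free = [j for j in range(n) if j not in frozen]
        if not free:
            break
        j_star = max(free, key=lambda j: abs(total[j]))
        frozen[j_star] = int(total[j_star] < 0)
    return hard                                 # best effort on failure
```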
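For the third related record (GKP analog information in iterative decoding), the snippet below shows one common way analog GKP syndrome data can be mapped to per-qubit LLRs. The two-Gaussian approximation, the noise parameter sigma, and the function name are our assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

SQRT_PI = np.sqrt(np.pi)

def gkp_analog_llr(deviation, sigma):
    """Per-qubit LLR from GKP homodyne data (illustrative approximation).

    deviation : distance of the measured quadrature from the nearest point of
                the sqrt(pi) lattice, in (-sqrt(pi)/2, sqrt(pi)/2]
    sigma     : standard deviation of the assumed Gaussian displacement noise
    The closer |deviation| gets to sqrt(pi)/2, the more likely the GKP round
    snapped to the wrong lattice point, so the LLR shrinks toward zero.
    """
    d = np.abs(np.asarray(deviation, dtype=float))
    # log-likelihood ratio of the nearest candidate vs. the one sqrt(pi) away,
    # keeping only the two dominant Gaussian terms
    return ((SQRT_PI - d) ** 2 - d ** 2) / (2.0 * sigma ** 2)
```

In such a scheme, these per-qubit LLRs would replace uniform channel LLRs as the iterative decoder's prior, which is roughly how the analog information would reach a min-sum decoder.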
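For the last two related records, which compare min-sum variants, the sketch below spells out the check-node update that distinguishes the plain, attenuated, and offset min-sum algorithms. The default alpha and beta values are illustrative choices of ours, and the papers' modified MSA is not reproduced here.

```python
import numpy as np

def check_node_update(incoming, variant="msa", alpha=0.75, beta=0.15):
    """Check-to-variable messages for one check node of an LDPC decoder.

    incoming : variable-to-check LLR messages along the check node's edges
    variant  : 'msa' (plain min-sum), 'amsa' (attenuated), 'omsa' (offset)
    Assumes the check node has degree at least two.
    """
    incoming = np.asarray(incoming, dtype=float)
    out = np.empty_like(incoming)
    for k in range(len(incoming)):
        others = np.delete(incoming, k)            # extrinsic messages only
        sign = -1.0 if np.count_nonzero(others < 0) % 2 else 1.0
        mag = np.min(np.abs(others))
        if variant == "amsa":
            mag *= alpha                           # scale down the magnitude
        elif variant == "omsa":
            mag = max(mag - beta, 0.0)             # subtract a fixed offset
        out[k] = sign * mag
    return out

print(check_node_update([1.2, -0.4, 2.0], "msa"))   # [-0.4  1.2 -0.4]
print(check_node_update([1.2, -0.4, 2.0], "amsa"))  # attenuated magnitudes
```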