Title: Soft syndrome iterative decoding of quantum LDPC codes and hardware architectures
Abstract

In practical quantum error correction implementations, the measurement of syndrome information is an unreliable step, typically modeled as a binary measurement outcome that is flipped with some probability. However, the measured syndrome is in fact a discretized value of the continuous voltage or current obtained in the physical implementation of the syndrome extraction. In this paper, we use this “soft” or analog information to benefit iterative decoders for quantum low-density parity-check (QLDPC) codes. Syndrome-based iterative belief propagation decoders are modified to utilize the soft syndrome and correct both data and syndrome errors simultaneously. We demonstrate the advantages of the proposed scheme not only through comparisons of thresholds and logical error rates for quasi-cyclic lifted-product QLDPC code families, but also through faster convergence of the iterative decoders. Additionally, we derive hardware (FPGA) architectures of these soft syndrome decoders and obtain error-correction performance similar to that of the ideal models even with reduced precision in the soft information. The total latency of the hardware architectures is about 600 ns (for the QLDPC codes considered) in a 20 nm CMOS process FPGA device, and the area overhead is almost constant: less than 50% compared to min-sum decoders with noisy syndromes.
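To make the idea concrete, the sketch below shows one way the analog syndrome value can enter a normalized min-sum decoder as an extra soft input per check node, so that data and syndrome errors are estimated jointly. It is a minimal Python illustration under assumed conventions (flooding schedule, normalization factor `gamma`, and a simple soft-syndrome update rule); it is not the authors' exact algorithm or schedule.

```python
import numpy as np

def soft_syndrome_min_sum(H, data_llr, syn_llr, gamma=0.8, max_iter=32):
    """Normalized min-sum sketch in which each (unreliable) syndrome bit is
    treated as an extra soft input to its check node, so data and syndrome
    errors are corrected together.

    H        : binary parity-check matrix (m x n)
    data_llr : prior LLRs of the n error bits
    syn_llr  : analog ("soft") syndrome LLRs; their signs give the hard syndrome
    gamma    : min-sum normalization factor (illustrative value)
    """
    m, n = H.shape
    checks = [np.flatnonzero(H[i]) for i in range(m)]
    c2v = np.zeros((m, n))                 # check-to-variable messages
    llr = data_llr.astype(float).copy()    # running posterior LLRs
    syn = syn_llr.astype(float).copy()     # running soft syndrome estimates
    e_hat = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        new_c2v = np.zeros((m, n))
        for i, nbrs in enumerate(checks):
            # variable-to-check inputs plus the soft syndrome as a final "edge"
            msgs = np.append(llr[nbrs] - c2v[i, nbrs], syn[i])
            total_sign = np.prod(np.sign(msgs))
            for k, j in enumerate(nbrs):
                others = np.abs(np.delete(msgs, k))
                new_c2v[i, j] = gamma * total_sign * np.sign(msgs[k]) * others.min()
            # refresh the soft syndrome estimate from the data messages alone
            data_msgs = msgs[:-1]
            syn[i] = syn_llr[i] + gamma * np.prod(np.sign(data_msgs)) * np.min(np.abs(data_msgs))
        c2v = new_c2v
        llr = data_llr + c2v.sum(axis=0)
        e_hat = (llr < 0).astype(int)
        if np.array_equal(H @ e_hat % 2, (syn < 0).astype(int)):
            return e_hat, True             # converged to the estimated syndrome
    return e_hat, False
```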

 
Award ID(s): 2100013, 2052751, 2106189, 2027844, 1855879
NSF-PAR ID: 10493559
Publisher / Repository: Springer
Journal Name: EPJ Quantum Technology
Volume: 10
Issue: 1
ISSN: 2662-4400
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
1. Recent constructions of quantum low-density parity-check (QLDPC) codes provide optimal scaling of the number of logical qubits and the minimum distance in terms of the code length, thereby opening the door to fault-tolerant quantum systems with minimal resource overhead. However, the hardware path from nearest-neighbor-connection-based topological codes to long-range-interaction-demanding QLDPC codes is likely a challenging one. Given the practical difficulty in building a monolithic architecture for quantum systems, such as computers, based on optimal QLDPC codes, it is worth considering a distributed implementation of such codes over a network of interconnected medium-sized quantum processors. In such a setting, all syndrome measurements and logical operations must be performed through the use of high-fidelity shared entangled states between the processing nodes. Since probabilistic many-to-1 distillation schemes for purifying entanglement are inefficient, we investigate quantum error correction-based entanglement purification in this work. Specifically, we employ QLDPC codes to distill GHZ states, as the resulting high-fidelity logical GHZ states can interact directly with the code used to perform distributed quantum computing (DQC), e.g. for fault-tolerant Steane syndrome extraction. This protocol is applicable beyond DQC, since entanglement distribution and purification is a quintessential task of any quantum network. We use the min-sum algorithm (MSA)-based iterative decoder with a sequential schedule for distilling 3-qubit GHZ states using a rate-0.118 family of lifted product QLDPC codes and obtain an input fidelity threshold of 0.7974 under i.i.d. single-qubit depolarizing noise. This represents the best threshold for a yield of 0.118 for any GHZ purification protocol. Our results apply to larger GHZ states as well, where we extend our technical result about a measurement property of 3-qubit GHZ states to construct a scalable GHZ purification protocol.

     
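Since this abstract highlights a min-sum decoder with a sequential schedule, here is a minimal Python sketch of a layered (sequential) min-sum update for a binary syndrome decoding problem. The normalization factor and stopping rule are illustrative assumptions; the GHZ-distillation wrapper and the CSS structure of the lifted-product codes from the paper are not reproduced.

```python
import numpy as np

def layered_min_sum(H, llr_prior, syndrome, gamma=0.8, max_iter=32):
    """Sequential ("layered") min-sum sketch: checks are processed one at a
    time and the posterior LLRs are updated immediately, which typically
    converges in fewer iterations than a flooding schedule.

    H         : binary parity-check matrix (m x n)
    llr_prior : prior LLRs of the n error bits
    syndrome  : hard syndrome bits (0/1) the estimate must reproduce
    gamma     : min-sum normalization factor (illustrative value)
    """
    m, n = H.shape
    checks = [np.flatnonzero(H[i]) for i in range(m)]
    c2v = np.zeros((m, n))                  # stored check-to-variable messages
    llr = llr_prior.astype(float).copy()    # running posterior LLRs
    e_hat = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        for i, nbrs in enumerate(checks):
            msgs = llr[nbrs] - c2v[i, nbrs]             # fresh inputs to this check
            sgn = (-1) ** syndrome[i] * np.prod(np.sign(msgs))
            for k, j in enumerate(nbrs):
                others = np.abs(np.delete(msgs, k))
                new = gamma * sgn * np.sign(msgs[k]) * others.min()
                llr[j] += new - c2v[i, j]               # immediate posterior update
                c2v[i, j] = new
        e_hat = (llr < 0).astype(int)
        if np.array_equal(H @ e_hat % 2, syndrome):
            return e_hat, True
    return e_hat, False
```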
2. Iterative decoders for finite-length quantum low-density parity-check (QLDPC) codes are attractive because their hardware complexity scales only linearly with the number of physical qubits. However, they are impacted by short cycles, by detrimental graphical configurations known as trapping sets (TSs) present in the code graph, and by the symmetric degeneracy of errors. These factors significantly degrade the decoder's performance and cause a so-called error floor. In this paper, we establish a systematic methodology by which one can identify and classify quantum trapping sets (QTSs) according to their topological structure and the decoder used. The conventional definition of a TS from classical error correction is generalized to address the syndrome decoding scenario for QLDPC codes. We show that knowledge of QTSs can be used to design better QLDPC codes and decoders. Frame-error-rate improvements of two orders of magnitude in the error floor regime are demonstrated for some practical finite-length QLDPC codes without requiring any post-processing.
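For reference, the classical (a, b) trapping-set notion that this abstract generalizes can be stated in a few lines of Python: a set of a variable nodes whose induced subgraph leaves b check nodes with odd degree. The brute-force enumerator below only illustrates the definition; the paper's quantum-specific classification by topology and decoder is not reproduced.

```python
import numpy as np
from itertools import combinations

def trapping_set_profile(H, var_subset):
    """Return the (a, b) profile of a set of variable (qubit) nodes:
    a = size of the set, b = number of check nodes joined to the set an odd
    number of times (odd-degree checks in the induced subgraph)."""
    cols = sorted(var_subset)
    odd_checks = np.flatnonzero(H[:, cols].sum(axis=1) % 2 == 1)
    return len(cols), len(odd_checks)

def small_configurations(H, a, b_max):
    """Brute-force enumeration of size-a variable sets with at most b_max
    odd-degree checks.  Exponential in a -- for illustrating the definition
    only, not a practical search."""
    n = H.shape[1]
    for subset in combinations(range(n), a):
        _, b = trapping_set_profile(H, subset)
        if b <= b_max:
            yield subset, b
```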
3. Quantum error correction has recently been shown to benefit greatly from specific physical encodings of the code qubits. In particular, several researchers have considered encoding the individual code qubits with the continuous-variable Gottesman-Kitaev-Preskill (GKP) code and then imposing an outer discrete-variable code, such as the surface code, on these GKP qubits. Under such a concatenation scheme, the analog information from the inner GKP error correction improves the noise threshold of the outer code. However, the surface code has vanishing rate and demands substantial resources as the distance grows. In this work, we concatenate the GKP code with generic quantum low-density parity-check (QLDPC) codes and demonstrate a natural way to exploit the GKP analog information in iterative decoding algorithms. We first show the noise thresholds for two lifted-product QLDPC code families, and then show the improvements of these thresholds when the iterative decoder, a hardware-friendly min-sum algorithm (MSA), utilizes the GKP analog information. We also show that, when the GKP analog information is combined with a sequential update schedule for the MSA, the scheme surpasses the well-known CSS Hamming bound for these code families. Furthermore, we observe that the GKP analog information helps the iterative decoder escape harmful trapping sets in the Tanner graph of the QLDPC code, thereby eliminating or significantly lowering the error floor of the logical error rate curves. Finally, we discuss new fundamental and practical questions that arise from this work on channel capacity under GKP analog information and on improving decoder design and analysis.
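A common way to exploit the GKP analog information, consistent with what this abstract describes, is to convert the residual displacement left after GKP rounding into a per-qubit LLR and hand that to the outer QLDPC decoder as its channel prior. The sketch below assumes Gaussian displacement noise and truncates the lattice sum to a few terms; the exact form used by the authors may differ.

```python
import numpy as np

def gkp_analog_llr(z, sigma, n_terms=3):
    """Turn the GKP analog information into a per-qubit LLR.

    z       : residual displacement after rounding to the nearest sqrt(pi)
              lattice point (|z| <= sqrt(pi)/2)
    sigma   : standard deviation of the assumed Gaussian displacement noise
    n_terms : truncation of the lattice sum (illustrative choice)

    Displacements an even multiple of sqrt(pi) away leave the qubit intact,
    odd multiples flip it; the LLR is the log-ratio of the two likelihoods.
    """
    rp = np.sqrt(np.pi)
    ns = np.arange(-n_terms, n_terms + 1)
    p_even = np.sum(np.exp(-((z - 2 * ns * rp) ** 2) / (2 * sigma ** 2)))
    p_odd = np.sum(np.exp(-((z - (2 * ns + 1) * rp) ** 2) / (2 * sigma ** 2)))
    return float(np.log(p_even / p_odd))

# Example: a small residual displacement gives a confident "no error" prior,
# while one near sqrt(pi)/2 is close to the decision boundary.
print(gkp_analog_llr(z=0.1, sigma=0.5))
print(gkp_analog_llr(z=0.85, sigma=0.5))
```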
4. Large-scale quantum computers will inevitably need quantum error correction to protect information against decoherence. Traditional error correction typically requires many qubits, along with high-efficiency error syndrome measurement and real-time feedback. Autonomous quantum error correction instead uses steady-state bath engineering to perform the correction in a hardware-efficient manner. In this work, we develop a new autonomous quantum error correction scheme that actively corrects single-photon loss and passively suppresses low-frequency dephasing, and we demonstrate an important experimental step towards its full implementation with transmons. Compared to uncorrected encoding, improvements are experimentally witnessed for the logical zero, one, and superposition states. Our results show the potential of implementing hardware-efficient autonomous quantum error correction to enhance the reliability of a transmon-based quantum information processor.

     
5. Non-uniform message quantization techniques such as reconstruction-computation-quantization (RCQ) improve error-correction performance and decrease hardware complexity of low-density parity-check (LDPC) decoders that use a flooding schedule. Layered MinSum RCQ (L-msRCQ) enables message quantization to be utilized for layered decoders and irregular LDPC codes. We investigate field-programmable gate array (FPGA) implementations of L-msRCQ decoders. Three design methods for message quantization are presented, which we name the Lookup, Broadcast, and Dribble methods. The decoding performance and hardware complexity of these schemes are compared to a layered offset MinSum (OMS) decoder. Simulation results on a (16384, 8192) protograph-based raptor-like (PBRL) LDPC code show that a 4-bit L-msRCQ decoder using the Broadcast method can achieve a 0.03 dB improvement in error-correction performance while using 12% fewer registers than the OMS decoder. A Broadcast-based 3-bit L-msRCQ decoder uses 15% fewer lookup tables, 18% fewer registers, and 13% fewer routed nets than the OMS decoder, but results in a 0.09 dB loss in performance.
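As a rough illustration of the RCQ idea, the sketch below performs one check-node update in three stages: reconstruct low-bit message labels to higher-precision magnitudes via a lookup table, compute a min-sum result, and re-quantize with non-uniform thresholds. The table, thresholds, and function interface are placeholders; the Lookup, Broadcast, and Dribble methods in the paper concern how such tables are distributed across the FPGA and are not modeled here.

```python
import numpy as np

def rcq_check_update(labels, signs, recon_table, thresholds):
    """One RCQ-style check-node update in three stages:
      Reconstruction : map low-bit magnitude labels to higher-precision values
      Computation    : extrinsic min-sum over the reconstructed magnitudes
      Quantization   : map results back to low-bit labels via non-uniform thresholds
    """
    mags = recon_table[labels]                         # Reconstruction
    total_sign = np.prod(signs)
    out_labels = np.empty_like(labels)
    for k in range(len(labels)):                       # Computation
        extrinsic_min = np.delete(mags, k).min()
        out_labels[k] = np.searchsorted(thresholds, extrinsic_min)  # Quantization
    out_signs = total_sign * signs                     # extrinsic sign for each edge
    return out_labels, out_signs

# Hypothetical 3-bit reconstruction table and thresholds (placeholders)
recon = np.array([0.0, 0.6, 1.3, 2.1, 3.0, 4.2, 5.8, 8.0])
thres = np.array([0.3, 0.9, 1.7, 2.5, 3.6, 5.0, 6.9])
labels = np.array([2, 5, 1, 4])        # incoming 3-bit magnitude labels
signs = np.array([+1, -1, +1, +1])     # incoming message signs
print(rcq_check_update(labels, signs, recon, thres))
```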