Title: Single-Shot Decoding of Good Quantum LDPC Codes
Abstract

Quantum Tanner codes constitute a family of quantum low-density parity-check codes with good parameters, i.e., constant encoding rate and relative distance. In this article, we prove that quantum Tanner codes also facilitate single-shot quantum error correction (QEC) of adversarial noise, where one measurement round (consisting of constant-weight parity checks) suffices to perform reliable QEC even in the presence of measurement errors. We establish this result for both the sequential and parallel decoding algorithms introduced by Leverrier and Zémor. Furthermore, we show that in order to suppress errors over multiple repeated rounds of QEC, it suffices to run the parallel decoding algorithm for constant time in each round. Combined with good code parameters, the resulting constant-time overhead of QEC and robustness to (possibly time-correlated) adversarial noise make quantum Tanner codes alluring from the perspective of quantum fault-tolerant protocols.
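As a rough illustration of the repeated single-shot structure described in the abstract, the following Python sketch runs rounds of noisy syndrome extraction followed by a constant number of decoding iterations per round. It uses a toy classical cyclic repetition code with a greedy bit-flip rule as a stand-in; the code, the function names, and the noise rates are illustrative assumptions and are not the quantum Tanner codes or the Leverrier-Zémor decoders analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for illustration only: a classical cyclic repetition code with
# parity checks c_i = x_i XOR x_{i+1}.  This sketch only mirrors the structure
# of repeated single-shot rounds: noisy syndromes, a constant amount of
# decoding work per round, and a residual error carried into the next round.
n = 101                                   # number of bits in the toy code
H = (np.eye(n, dtype=np.uint8) + np.roll(np.eye(n, dtype=np.uint8), 1, axis=1)) % 2

def noisy_syndrome(error, meas_error_rate):
    """Measure all parity checks, flipping each outcome with some probability."""
    syndrome = (H @ error) % 2
    flips = (rng.random(n) < meas_error_rate).astype(np.uint8)
    return (syndrome ^ flips).astype(np.uint8)

def constant_time_flip_decode(syndrome, iterations=5):
    """Greedy bit-flip decoding run for a constant number of iterations."""
    correction = np.zeros(n, dtype=np.uint8)
    current = syndrome.copy()
    for _ in range(iterations):
        # Flip every bit whose two adjacent parity checks are both violated.
        unhappy = (current.astype(int) + np.roll(current, 1).astype(int)) == 2
        if not unhappy.any():
            break
        correction ^= unhappy.astype(np.uint8)
        current = (syndrome + H @ correction) % 2
    return correction

# Repeated QEC rounds: each round injects fresh data errors, measures a noisy
# syndrome, and spends only constant decoding time.
error = np.zeros(n, dtype=np.uint8)
for _ in range(20):
    error ^= (rng.random(n) < 0.01).astype(np.uint8)
    error ^= constant_time_flip_decode(noisy_syndrome(error, 0.01))
print("residual error weight after 20 rounds:", int(error.sum()))
```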

 
PAR ID: 10495438
Publisher / Repository: Springer Science + Business Media
Journal Name: Communications in Mathematical Physics
Volume: 405
Issue: 3
ISSN: 0010-3616
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The Minimum-Weight Perfect Matching (MWPM) decoder is widely used in quantum error correction (QEC) decoding. Despite its high accuracy, existing implementations of the MWPM decoder cannot keep up with quantum hardware, e.g., 1 million measurements per second for superconducting qubits. They suffer from a backlog of measurements that grows exponentially and, as a result, cannot realize the power of quantum computation. We design and implement a fast MWPM decoder, called Parity Blossom, whose time complexity is almost proportional to the number of defect measurements. We further design and implement a parallel version of Parity Blossom called Fusion Blossom. Given a practical circuit-level noise of 0.1%, Fusion Blossom can decode a million measurement rounds per second up to a code distance of 33. Fusion Blossom also supports a stream decoding mode that reaches a 0.7 ms decoding latency at code distance 21 regardless of the number of measurement rounds. 
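As context for the MWPM approach in the entry above, here is a minimal, generic matching sketch built on networkx's max_weight_matching. The function name mwpm_pairs, the toy coordinates, and the Manhattan metric are illustrative assumptions; this is not the Parity Blossom or Fusion Blossom implementation, and a real decoder would also handle boundary nodes and time-like (measurement-error) edges.

```python
import itertools
import networkx as nx

# Generic MWPM decoding sketch: pair up defect measurements so that the total
# pairing distance is minimized.  Defects are (row, column) coordinates on a
# toy planar lattice; distance is the Manhattan metric.

def mwpm_pairs(defects):
    """Return a minimum-weight perfect matching over an even set of defects."""
    graph = nx.Graph()
    for (i, u), (j, v) in itertools.combinations(enumerate(defects), 2):
        distance = abs(u[0] - v[0]) + abs(u[1] - v[1])
        # max_weight_matching maximizes, so use negated distances as weights.
        graph.add_edge(i, j, weight=-distance)
    matching = nx.max_weight_matching(graph, maxcardinality=True)
    return [(defects[i], defects[j]) for i, j in matching]

# Example: four defects produced by two short error chains.
print(mwpm_pairs([(0, 0), (0, 1), (3, 4), (5, 4)]))
```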
  2. A fault-tolerant quantum computer must decode and correct errors faster than they appear. The faster errors can be corrected, the more time the computer can do useful work. The Union-Find (UF) decoder is promising, with an average time complexity slightly higher than O(d^3). We report a distributed version of the UF decoder that exploits parallel computing resources for further speedup. Using an FPGA-based implementation, we empirically show that this distributed UF decoder has a sublinear average time complexity with regard to d, given O(d^3) parallel computing resources. The decoding time per measurement round decreases as d increases, a first for a quantum error decoder. The implementation employs a scalable architecture called Helios that organizes parallel computing resources into a hybrid tree-grid structure. We are able to implement d up to 21 on a Xilinx VCU129 FPGA, for which the average decoding time is 11.5 ns per measurement round under phenomenological noise of 0.1%, significantly faster than any existing decoder implementation. Since the decoding time per measurement round of Helios decreases with d, Helios can decode a surface code of arbitrarily large d without a growing backlog. 
  3. A fault-tolerant quantum computer must decode and correct errors faster than they appear to prevent exponential slowdown due to error correction. The Union-Find (UF) decoder is promising, with an average time complexity slightly higher than O(d^3). We report a distributed version of the UF decoder that exploits parallel computing resources for further speedup. Using an FPGA-based implementation, we empirically show that this distributed UF decoder has a sublinear average time complexity with regard to d, given O(d^3) parallel computing resources. The decoding time per measurement round decreases as d increases, a first for a quantum error decoder. The implementation employs a scalable architecture called Helios that organizes parallel computing resources into a hybrid tree-grid structure. Using a Xilinx VCU129 FPGA, we successfully implement d up to 21 with an average decoding time of 11.5 ns per measurement round under 0.1% phenomenological noise, and 23.7 ns for d = 17 under equivalent circuit-level noise. This performance is significantly faster than any existing decoder implementation. Furthermore, we show that Helios can optimize for resource efficiency by decoding d = 51 on a Xilinx VCU129 FPGA with an average latency of 544 ns per measurement round. 
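The two entries above build on the union-find (disjoint-set) primitive. The following minimal sketch shows that primitive with path compression and union by size; the cluster growth, boundary handling, and the Helios tree-grid parallelization are omitted, and the class and example below are illustrative assumptions rather than the reported implementation.

```python
# Minimal union-find (disjoint-set) structure with path compression and union
# by size -- the data structure at the heart of the UF decoder.  The decoder
# additionally grows clusters on the decoding graph, merges them when they
# touch, and applies a correction inside each valid cluster; those steps are
# not shown here.

class UnionFind:
    def __init__(self, num_nodes):
        self.parent = list(range(num_nodes))
        self.size = [1] * num_nodes

    def find(self, x):
        # Path compression: point every visited node directly at the root.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return rx
        if self.size[rx] < self.size[ry]:     # union by size
            rx, ry = ry, rx
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]
        return rx

# Example: merging clusters of defect vertices as their boundaries touch.
uf = UnionFind(6)
uf.union(0, 1)
uf.union(2, 3)
uf.union(1, 2)
print(uf.find(0) == uf.find(3))   # True: the two clusters have merged
```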
  4. The storage and processing of quantum information are susceptible to external noise, resulting in computational errors. A powerful method to suppress these effects is quantum error correction. Typically, quantum error correction is executed in discrete rounds, using entangling gates and projective measurement on ancillary qubits to complete each round of error correction. Here we use direct parity measurements to implement a continuous quantum bit-flip correction code in a resource-efficient manner, eliminating entangling gates, ancillary qubits, and their associated errors. An FPGA controller actively corrects errors as they are detected, achieving an average bit-flip detection efficiency of up to 91%. Furthermore, the protocol increases the relaxation time of the protected logical qubit by a factor of 2.7 over the relaxation times of its bare constituent qubits. Our results showcase resource-efficient stabilizer measurements in a multi-qubit architecture and demonstrate how continuous error correction codes can address challenges in realizing a fault-tolerant system. 
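For the entry above, the following classical sketch shows only the discrete lookup logic behind three-qubit bit-flip correction, mapping the two parity outcomes to the qubit to flip. It is an illustrative stand-in; it does not model the continuous parity measurements or the FPGA feedback loop of the experiment.

```python
# Classical illustration of the three-qubit bit-flip code's correction logic:
# the two parity outcomes (Z1Z2, Z2Z3) single out which qubit (if any) to flip.

CORRECTION_TABLE = {
    (0, 0): None,   # no parity violated: no correction
    (1, 0): 0,      # only Z1Z2 violated: flip qubit 0
    (1, 1): 1,      # both violated:      flip qubit 1
    (0, 1): 2,      # only Z2Z3 violated: flip qubit 2
}

def correct_bit_flip(bits):
    """Correct a single bit flip on a 3-bit repetition codeword in place."""
    parities = (bits[0] ^ bits[1], bits[1] ^ bits[2])
    target = CORRECTION_TABLE[parities]
    if target is not None:
        bits[target] ^= 1
    return bits

print(correct_bit_flip([0, 1, 0]))   # -> [0, 0, 0]
print(correct_bit_flip([1, 1, 1]))   # -> [1, 1, 1] (logical one preserved)
```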
  5. Generalized low-density parity-check (GLDPC) codes, where the single parity-check (SPC) nodes are replaced by generalized constraint (GC) nodes, are known to offer a reduced gap to capacity when compared with conventional LDPC codes, while also maintaining linear growth of minimum distance. However, for certain classes of practical GLDPC codes, a gap to capacity remains even when a blockwise decoding algorithm is used at the GC nodes. In this work, we propose to optimize the design of GLDPC codes in which the GC nodes are decoded with a trellis-based bit-wise Bahl-Cocke-Jelinek-Raviv (BCJR) component decoding algorithm. We analyze the asymptotic threshold behavior of GLDPC codes and determine the optimal proportion of GC nodes in the GLDPC Tanner graph. We show significant performance improvements compared to existing designs with the same order of decoding complexity. 
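To make the SPC-versus-GC distinction in the entry above concrete, the sketch below assembles a toy GLDPC parity-check matrix in which a chosen fraction of constraint nodes impose a (7,4) Hamming code on their neighborhoods while the rest remain single parity checks. The construction, the Hamming component code, and the parameters are illustrative assumptions; the BCJR component decoding and threshold optimization from the work are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy GLDPC construction: every constraint node touches 7 variable nodes.
# A fraction `gc_fraction` of them are generalized constraint (GC) nodes that
# impose the (7,4) Hamming code on their neighborhood; the rest are single
# parity checks (SPC).

HAMMING_H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def toy_gldpc_parity_matrix(num_vars, num_constraints, gc_fraction):
    rows = []
    for c in range(num_constraints):
        neighbors = rng.choice(num_vars, size=7, replace=False)
        if c < int(gc_fraction * num_constraints):
            local = HAMMING_H                            # GC node: 3 Hamming rows
        else:
            local = np.ones((1, 7), dtype=np.uint8)      # SPC node: 1 parity row
        for local_row in local:
            row = np.zeros(num_vars, dtype=np.uint8)
            row[neighbors] = local_row
            rows.append(row)
    return np.vstack(rows)

H = toy_gldpc_parity_matrix(num_vars=63, num_constraints=18, gc_fraction=0.5)
print(H.shape)   # more rows than constraint nodes: each GC node adds 3 checks
```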