
Title: Entanglement Purification with Quantum LDPC Codes and Iterative Decoding

Recent constructions of quantum low-density parity-check (QLDPC) codes provide optimal scaling of the number of logical qubits and the minimum distance in terms of the code length, thereby opening the door to fault-tolerant quantum systems with minimal resource overhead. However, the hardware path from nearest-neighbor-connection-based topological codes to long-range-interaction-demanding QLDPC codes is likely a challenging one. Given the practical difficulty of building a monolithic architecture for quantum systems, such as computers, based on optimal QLDPC codes, it is worth considering a distributed implementation of such codes over a network of interconnected medium-sized quantum processors. In such a setting, all syndrome measurements and logical operations must be performed through the use of high-fidelity shared entangled states between the processing nodes. Since probabilistic many-to-1 distillation schemes for purifying entanglement are inefficient, we investigate quantum-error-correction-based entanglement purification in this work. Specifically, we employ QLDPC codes to distill GHZ states, as the resulting high-fidelity logical GHZ states can interact directly with the code used to perform distributed quantum computing (DQC), e.g., for fault-tolerant Steane syndrome extraction. This protocol is applicable beyond DQC, since entanglement distribution and purification is a quintessential task of any quantum network. We use a min-sum algorithm (MSA)-based iterative decoder with a sequential schedule for distilling 3-qubit GHZ states using a rate-0.118 family of lifted-product QLDPC codes and obtain an input fidelity threshold of 0.7974 under i.i.d. single-qubit depolarizing noise. This represents the best threshold for a yield of 0.118 among GHZ purification protocols. Our results apply to larger GHZ states as well: we extend our technical result about a measurement property of 3-qubit GHZ states to construct a scalable GHZ purification protocol.
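The decoding step is classical once syndromes are extracted: for a CSS-type QLDPC code, X- and Z-type errors can be decoded separately against a binary parity-check matrix. Below is a minimal sketch of a syndrome-matching min-sum decoder with the sequential (serial) schedule mentioned in the abstract; the parity-check matrix, the uniform LLR prior, and the repetition-code demo are illustrative stand-ins, not the paper's rate-0.118 lifted-product codes.

```python
import numpy as np

def min_sum_decode(H, llr, syndrome, max_iter=100):
    """Min-sum iterative decoding with a serial (layered) schedule.

    H        : (m, n) binary parity-check matrix
    llr      : length-n prior log-likelihood ratios (> 0 favours "no error")
    syndrome : length-m binary syndrome the estimated error must reproduce
    """
    m, n = H.shape
    msgs = np.zeros((m, n))            # check-to-variable messages
    post = llr.astype(float).copy()    # running posterior LLRs
    err = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        for c in range(m):             # serial schedule: one check at a time
            idx = np.flatnonzero(H[c])
            v2c = post[idx] - msgs[c, idx]               # extrinsic messages
            s = np.where(v2c >= 0, 1.0, -1.0)
            sgn = (-1) ** syndrome[c] * np.prod(s) * s   # product of the *other* signs
            mag = np.abs(v2c)
            new = np.empty_like(v2c)
            for k in range(len(idx)):                    # min over the other edges
                new[k] = sgn[k] * np.min(np.delete(mag, k))
            post[idx] += new - msgs[c, idx]
            msgs[c, idx] = new
        err = (post < 0).astype(int)
        if np.array_equal(H @ err % 2, syndrome):
            return err, True                             # syndrome matched
    return err, False

# Toy demo: 3-bit repetition-code checks, error on bit 0.
H = np.array([[1, 1, 0], [0, 1, 1]])
err, ok = min_sum_decode(H, llr=np.full(3, 2.0), syndrome=np.array([1, 0]))
print(ok, err)   # -> True [1 0 0]
```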

 
Award ID(s): 2100013, 2106189, 2027844, 1855879
NSF-PAR ID: 10494563
Publisher / Repository: Quantum
Journal Name: Quantum
Volume: 8
ISSN: 2521-327X
Page Range / eLocation ID: 1233
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Minimizing and understanding errors is critical for quantum science, both in noisy intermediate-scale quantum (NISQ) devices [1] and for the quest towards fault-tolerant quantum computation [2,3]. Rydberg arrays have emerged as a prominent platform in this context [4], with impressive system sizes [5,6] and proposals suggesting how error-correction thresholds could be significantly improved by detecting leakage errors with single-atom resolution [7,8], a form of erasure error conversion [9–12]. However, two-qubit entanglement fidelities in Rydberg atom arrays [13,14] have lagged behind competitors [15,16], and this type of erasure conversion is yet to be realized for matter-based qubits in general. Here we demonstrate both erasure conversion and high-fidelity Bell state generation using a Rydberg quantum simulator [5,6,17,18]. When excising data with erasure errors observed via fast imaging of alkaline-earth atoms [19–22], we achieve a Bell state fidelity of $\ge 0.9971_{-13}^{+10}$, which improves to $\ge 0.9985_{-12}^{+7}$ when correcting for remaining state-preparation errors. We further apply erasure conversion in a quantum simulation experiment for quasi-adiabatic preparation of long-range order across a quantum phase transition, and reveal the otherwise hidden impact of these errors on the simulation outcome. Our work demonstrates the capability of Rydberg-based entanglement to reach fidelities in the 0.999 regime, with higher fidelities a question of technical improvements, and shows how erasure conversion can be utilized in NISQ devices. These techniques could be translated directly to quantum-error-correction codes with the addition of long-lived qubits [7,22–24].
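    The gain from excising erasure-flagged shots can be seen in a toy Monte Carlo model: heralded erasures are discarded, so only the unheralded (Pauli-like) errors limit the conditional fidelity. The error rates below are hypothetical placeholders, not the experiment's numbers.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    shots = 100_000
    p_erasure, p_pauli = 0.02, 0.002   # hypothetical, independent error rates

    erased = rng.random(shots) < p_erasure   # heralded by fast imaging
    faulty = rng.random(shots) < p_pauli     # unheralded Pauli-type errors
    good = ~erased & ~faulty                 # shot ends in the target Bell state

    print(f"raw fidelity:   {good.mean():.4f}")            # ~0.978
    print(f"after excision: {good[~erased].mean():.4f}")   # ~0.998
    ```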

     
  2. Abstract

    Suppressing errors is the central challenge for useful quantum computing [1], requiring quantum error correction (QEC) [2–6] for large-scale processing. However, the overhead in the realization of error-corrected ‘logical’ qubits, in which information is encoded across many physical qubits for redundancy [2–4], poses substantial challenges to large-scale logical quantum computing. Here we report the realization of a programmable quantum processor based on encoded logical qubits operating with up to 280 physical qubits. Using logical-level control and a zoned architecture in reconfigurable neutral-atom arrays [7], our system combines high two-qubit gate fidelities [8], arbitrary connectivity [7,9], as well as fully programmable single-qubit rotations and mid-circuit readout [10–15]. Operating this logical processor with various types of encoding, we demonstrate improvement of a two-qubit logic gate by scaling surface-code [6] distance from d = 3 to d = 7, preparation of colour-code qubits with break-even fidelities [5], fault-tolerant creation of logical Greenberger–Horne–Zeilinger (GHZ) states and feedforward entanglement teleportation, as well as operation of 40 colour-code qubits. Finally, using 3D [[8,3,2]] code blocks [16,17], we realize computationally complex sampling circuits [18] with up to 48 logical qubits entangled with hypercube connectivity [19], with 228 logical two-qubit gates and 48 logical CCZ gates [20]. We find that this logical encoding substantially improves algorithmic performance with error detection, outperforming physical-qubit fidelities at both cross-entropy benchmarking and quantum simulations of fast scrambling [21,22]. These results herald the advent of early error-corrected quantum computation and chart a path towards large-scale logical processors.
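    The [[8,3,2]] block code mentioned above has a standard presentation with qubits on the vertices of a cube: one X-type stabilizer on all eight vertices and Z-type stabilizers on the faces. A small symplectic check (a sketch of the textbook construction, not the paper's implementation) confirms the parameters n = 8, k = 3:

    ```python
    import numpy as np

    n = 8  # qubits on the vertices of a cube, labelled by 3-bit integers

    # Z stabilizers live on the six cube faces (one coordinate held fixed);
    # the lone X stabilizer acts on all eight vertices.
    faces = [[v for v in range(n) if ((v >> axis) & 1) == val]
             for axis in range(3) for val in (0, 1)]

    def pauli(xs=(), zs=()):
        """Binary symplectic representation (x-part | z-part) of a Pauli."""
        vec = np.zeros(2 * n, dtype=int)
        for q in xs: vec[q] = 1
        for q in zs: vec[n + q] = 1
        return vec

    stabs = [pauli(xs=range(n))] + [pauli(zs=f) for f in faces]

    def commute(p, q):  # symplectic inner product = 0  <=>  Paulis commute
        return (p[:n] @ q[n:] + p[n:] @ q[:n]) % 2 == 0

    assert all(commute(p, q) for p in stabs for q in stabs)

    def gf2_rank(rows):
        """Rank over GF(2) via bitmask Gaussian elimination."""
        pivots = {}
        for row in rows:
            v = int("".join(map(str, row)), 2)
            while v:
                top = v.bit_length() - 1
                if top in pivots:
                    v ^= pivots[top]
                else:
                    pivots[top] = v
                    break
        return len(pivots)

    k = n - gf2_rank(stabs)
    print(f"[[{n},{k},2]] code")  # -> [[8,3,2]]; d = 2 via a weight-2 logical on an edge
    ```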

     
  3. Utilizing the framework of $\mathbb{Z}_2$ lattice gauge theories in the context of Pauli stabilizer codes, we present methodologies for simulating fermions via qubit systems on a two-dimensional square lattice. We investigate the symplectic automorphisms of the Pauli module over the Laurent polynomial ring. This enables us to systematically increase the code distances of stabilizer codes while fixing the rate between encoded logical fermions and physical qubits. We identify a family of stabilizer codes suitable for fermion simulation, achieving code distances of $d = 2, 3, 4, 5, 6, 7$, allowing correction of any $\lfloor \frac{d-1}{2} \rfloor$-qubit error. In contrast to the traditional code-concatenation approach, our method can increase the code distances without decreasing the (fermionic) code rate. In particular, we explicitly show all stabilizers and logical operators for codes with code distances of $d = 3, 4, 5$. We provide syndromes for all Pauli errors and invent a syndrome-matching algorithm to compute code distances numerically.
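    A brute-force version of the distance computation makes the definition concrete: the distance is the minimum weight of a Pauli operator that commutes with every stabilizer yet lies outside the stabilizer group. The sketch below is illustrative only (the paper's syndrome-matching algorithm is far more efficient) and runs on the small, well-known [[4,2,2]] code rather than the fermionic codes of the paper.

    ```python
    from itertools import combinations, product
    import numpy as np

    n = 4
    # [[4,2,2]] stabilizers XXXX and ZZZZ in symplectic (x | z) form.
    stabs = np.array([[1]*4 + [0]*4, [0]*4 + [1]*4])

    def gf2_rank(mat):
        piv = {}
        for row in mat:
            v = int("".join(map(str, row)), 2)
            while v:
                t = v.bit_length() - 1
                if t in piv: v ^= piv[t]
                else: piv[t] = v; break
        return len(piv)

    def commutes_with_all(e):
        sym = (stabs[:, :n] @ e[n:] + stabs[:, n:] @ e[:n]) % 2
        return not sym.any()

    def in_stabilizer_group(e):
        return gf2_rank(np.vstack([stabs, e])) == gf2_rank(stabs)

    def distance():
        for w in range(1, n + 1):
            for support in combinations(range(n), w):
                for paulis in product("XYZ", repeat=w):  # non-identity on support
                    e = np.zeros(2 * n, dtype=int)
                    for q, P in zip(support, paulis):
                        if P in "XY": e[q] = 1
                        if P in "ZY": e[n + q] = 1
                    if commutes_with_all(e) and not in_stabilizer_group(e):
                        return w  # lowest-weight undetectable, non-trivial Pauli
        return None

    print(distance())  # -> 2
    ```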

     
  4. The Shor fault-tolerant error correction (FTEC) scheme uses transversal gates and ancilla qubits prepared in the cat state in syndrome extraction circuits to prevent propagation of errors caused by gate faults. For a stabilizer code of distance $d$ that can correct up to $t = (d-1)/2$ errors, the traditional Shor scheme handles ancilla preparation and measurement faults by performing syndrome measurements until the syndromes are repeated $t+1$ times in a row; in the worst-case scenario, $(t+1)^2$ rounds of measurements are required. In this work, we improve the Shor FTEC scheme using an adaptive syndrome measurement technique. The syndrome for error correction is determined based on information from the differences of syndromes obtained from consecutive rounds. Our protocols that satisfy the strong and the weak FTEC conditions require no more than $(t+3)^2/4 - 1$ rounds and $(t+3)^2/4 - 2$ rounds, respectively, and are applicable to any stabilizer code. Our simulations of FTEC protocols with the adaptive schemes on hexagonal color codes of small distances verify that our protocols preserve the code distance, can increase the pseudothreshold, and can decrease the average number of rounds compared to the traditional Shor scheme. We also find that for a code of distance $d$, our FTEC protocols with the adaptive schemes require no more than $d$ rounds on average.
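    The traditional stopping rule is easy to state in code: keep measuring until the same syndrome appears t+1 times in a row. The toy sketch below counts rounds for a given syndrome stream; it illustrates only the stopping condition, not the adaptive difference-based protocol or the worst-case fault placement analyzed in the paper.

    ```python
    def rounds_until_stable(syndromes, t):
        """Rounds consumed by the traditional Shor FTEC rule: stop once the
        same syndrome has been observed t+1 times consecutively."""
        last, run, rounds = None, 0, 0
        for rounds, s in enumerate(syndromes, start=1):
            run = run + 1 if s == last else 1
            last = s
            if run == t + 1:
                return rounds
        return rounds  # stream exhausted without a stable syndrome

    # Fault-free stream: t+1 rounds suffice (t = 2 here).
    print(rounds_until_stable(["s0", "s0", "s0"], t=2))                    # -> 3
    # A single measurement fault restarts the run and wastes rounds.
    print(rounds_until_stable(["s0", "s0", "s1", "s0", "s0", "s0"], t=2))  # -> 6
    ```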

     
  5. Abstract

    We study two-qubit circuits over the Clifford+CS gate set, which consists of the Clifford gates together with the controlled-phase gate CS = diag(1, 1, 1, i). The Clifford+CS gate set is universal for quantum computation, and its elements can be implemented fault-tolerantly in most error-correcting schemes through magic state distillation. Since non-Clifford gates are typically more expensive to perform in a fault-tolerant manner, it is often desirable to construct circuits that use few CS gates. In the present paper, we introduce an efficient and optimal synthesis algorithm for two-qubit Clifford+CS operators. Our algorithm inputs a Clifford+CS operator $U$ and outputs a Clifford+CS circuit for $U$ that uses the least possible number of CS gates. Because the algorithm is deterministic, the circuit it associates to a Clifford+CS operator can be viewed as a normal form for that operator. We give an explicit description of these normal forms and use this description to derive a worst-case lower bound of $5\log_2(1/\epsilon) + O(1)$ on the number of CS gates required to $\epsilon$-approximate elements of SU(4). Our work leverages a wide variety of mathematical tools that may find further applications in the study of fault-tolerant quantum circuits.
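    A quick numerical sanity check of the gate in question: CS = diag(1, 1, 1, i) is itself non-Clifford, but it squares to the Clifford gate CZ, which is why only the CS count matters as a cost metric. This is a generic check, not the synthesis algorithm itself.

    ```python
    import numpy as np

    CS = np.diag([1, 1, 1, 1j])   # controlled-phase gate CS = diag(1, 1, 1, i)
    CZ = np.diag([1, 1, 1, -1])   # controlled-Z, a Clifford gate

    assert np.allclose(CS @ CS, CZ)                               # CS^2 = CZ
    assert np.allclose(np.linalg.matrix_power(CS, 4), np.eye(4))  # CS^4 = I
    print("CS^2 = CZ: two CS gates reduce to a Clifford")
    ```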

     