Minimizing and understanding errors is critical for quantum science, both in noisy intermediate-scale quantum (NISQ) devices^{1} and for the quest towards fault-tolerant quantum computation^{2,3}. Rydberg arrays have emerged as a prominent platform in this context^{4}, with impressive system sizes^{5,6} and proposals suggesting how error-correction thresholds could be significantly improved by detecting leakage errors with single-atom resolution^{7,8}, a form of erasure error conversion^{9–12}. However, two-qubit entanglement fidelities in Rydberg atom arrays^{13,14} have lagged behind competitors^{15,16}, and this type of erasure conversion is yet to be realized for matter-based qubits in general. Here we demonstrate both erasure conversion and high-fidelity Bell state generation using a Rydberg quantum simulator^{5,6,17,18}. When excising data with erasure errors observed via fast imaging of alkaline-earth atoms^{19–22}, we achieve a Bell state fidelity of $\ge 0.9971_{-13}^{+10}$
Recent constructions of quantum low-density parity-check (QLDPC) codes provide optimal scaling of the number of logical qubits and the minimum distance in terms of the code length, thereby opening the door to fault-tolerant quantum systems with minimal resource overhead. However, the hardware path from nearest-neighbor-connection-based topological codes to long-range-interaction-demanding QLDPC codes is likely a challenging one. Given the practical difficulty in building a monolithic architecture for quantum systems, such as computers, based on optimal QLDPC codes, it is worth considering a distributed implementation of such codes over a network of interconnected medium-sized quantum processors. In such a setting, all syndrome measurements and logical operations must be performed through the use of high-fidelity shared entangled states between the processing nodes. Since probabilistic many-to-1 distillation schemes for purifying entanglement are inefficient, we investigate quantum error correction based entanglement purification in this work. Specifically, we employ QLDPC codes to distill GHZ states, as the resulting high-fidelity logical GHZ states can interact directly with the code used to perform distributed quantum computing (DQC), e.g. for fault-tolerant Steane syndrome extraction. This protocol is applicable beyond the application of DQC, since entanglement distribution and purification is a quintessential task of any quantum network. We use the min-sum algorithm (MSA) based iterative decoder with a sequential schedule for distilling $3$-qubit GHZ states using a rate-$0.118$ family of lifted product QLDPC codes and obtain an input fidelity threshold of $\approx 0.7974$ under i.i.d. single-qubit depolarizing noise. This represents the best threshold for a yield of $0.118$ for any GHZ purification protocol.
Our results apply to larger GHZ states as well: we extend our technical result about a measurement property of $3$-qubit GHZ states to construct a scalable GHZ purification protocol.
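The i.i.d. single-qubit depolarizing noise model quoted above can be sketched numerically. The following is a minimal NumPy sketch of that noise model only: it computes the raw fidelity of a 3-qubit GHZ state after per-qubit depolarizing noise of rate p, which is the input fidelity a purification protocol would see. It does not model the MSA decoder or the lifted product codes from the paper; the function names are illustrative.

```python
import numpy as np

# Pauli matrices for the single-qubit depolarizing channel
# rho -> (1 - p) rho + (p / 3) (X rho X + Y rho Y + Z rho Z)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def depolarize(rho, p, qubit, n):
    """Apply a depolarizing channel with rate p to one qubit of an n-qubit state."""
    out = (1 - p) * rho
    for P in (X, Y, Z):
        op = np.array([[1.0 + 0j]])
        for q in range(n):
            op = np.kron(op, P if q == qubit else I2)
        out = out + (p / 3) * op @ rho @ op.conj().T
    return out

def ghz_fidelity(p, n=3):
    """Fidelity <GHZ|rho|GHZ> of an n-qubit GHZ state after i.i.d. depolarizing noise."""
    ghz = np.zeros(2 ** n, dtype=complex)
    ghz[0] = ghz[-1] = 1 / np.sqrt(2)
    rho = np.outer(ghz, ghz.conj())
    for q in range(n):
        rho = depolarize(rho, p, q, n)
    return float(np.real(ghz.conj() @ rho @ ghz))
```

Sweeping p and comparing `ghz_fidelity(p)` against the quoted input-fidelity threshold of about 0.7974 indicates which physical noise rates a protocol with this threshold could tolerate.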
NSF-PAR ID: 10494563
Publisher / Repository: Quantum
Journal Name: Quantum
Volume: 8
ISSN: 2521-327X
Page Range / eLocation ID: 1233
Sponsoring Org: National Science Foundation
More Like this

Abstract …we achieve a Bell state fidelity of $\ge 0.9971_{-13}^{+10}$, which improves to $\ge 0.9985_{-12}^{+7}$ when correcting for remaining state-preparation errors. We further apply erasure conversion in a quantum simulation experiment for quasi-adiabatic preparation of long-range order across a quantum phase transition, and reveal the otherwise hidden impact of these errors on the simulation outcome. Our work demonstrates the capability for Rydberg-based entanglement to reach fidelities in the 0.999 regime, with higher fidelities a question of technical improvements, and shows how erasure conversion can be utilized in NISQ devices. These techniques could be translated directly to quantum-error-correction codes with the addition of long-lived qubits^{7,22–24}.
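The effect of excising erasure-flagged shots can be illustrated with a toy Monte Carlo model. This is a hedged sketch, not the paper's analysis: the error rates below are illustrative placeholders, and the model simply assumes each shot suffers either a detected erasure or an undetected Pauli error.

```python
import random

def postselect_fidelity(n_shots, p_erasure, p_pauli, seed=0):
    """Toy model: each shot independently suffers a detected erasure with
    probability p_erasure, or an undetected Pauli error with probability
    p_pauli. Discarding flagged shots removes the erasure contribution
    from the error budget of the surviving data."""
    rng = random.Random(seed)
    kept = kept_good = 0
    for _ in range(n_shots):
        if rng.random() < p_erasure:
            continue  # erasure detected by fast imaging -> shot excised
        kept += 1
        if rng.random() >= p_pauli:
            kept_good += 1  # surviving shot is error-free
    # kept-shot fidelity ~ 1 - p_pauli, independent of the erasure rate
    return kept_good / kept
```

The point of the sketch is that postselection makes the conditional fidelity of the kept shots insensitive to the (detected) erasure rate, at the cost of yield.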
Abstract Suppressing errors is the central challenge for useful quantum computing^{1}, requiring quantum error correction (QEC)^{2–6} for large-scale processing. However, the overhead in the realization of error-corrected 'logical' qubits, in which information is encoded across many physical qubits for redundancy^{2–4}, poses substantial challenges to large-scale logical quantum computing. Here we report the realization of a programmable quantum processor based on encoded logical qubits operating with up to 280 physical qubits. Using logical-level control and a zoned architecture in reconfigurable neutral-atom arrays^{7}, our system combines high two-qubit gate fidelities^{8}, arbitrary connectivity^{7,9}, as well as fully programmable single-qubit rotations and mid-circuit readout^{10–15}. Operating this logical processor with various types of encoding, we demonstrate improvement of a two-qubit logic gate by scaling surface-code^{6} distance from d = 3 to d = 7, preparation of colour-code qubits with break-even fidelities^{5}, fault-tolerant creation of logical Greenberger–Horne–Zeilinger (GHZ) states and feedforward entanglement teleportation, as well as operation of 40 colour-code qubits. Finally, using 3D [[8,3,2]] code blocks^{16,17}, we realize computationally complex sampling circuits^{18} with up to 48 logical qubits entangled with hypercube connectivity^{19}, with 228 logical two-qubit gates and 48 logical CCZ gates^{20}. We find that this logical encoding substantially improves algorithmic performance with error detection, outperforming physical-qubit fidelities at both cross-entropy benchmarking and quantum simulations of fast scrambling^{21,22}. These results herald the advent of early error-corrected quantum computation and chart a path towards large-scale logical processors.
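Why scaling the surface-code distance from d = 3 to d = 7 improves a logical gate can be sketched with the standard below-threshold heuristic, under which the logical error rate is suppressed exponentially in the distance. The constants `A` and `p_th` below are illustrative fit parameters, not values from the paper.

```python
def logical_error_rate(p, p_th=0.01, d=3, A=0.1):
    """Heuristic below-threshold scaling for a distance-d surface code:
    p_L ~ A * (p / p_th) ** ((d + 1) // 2).
    A and p_th are illustrative placeholders, not measured values."""
    return A * (p / p_th) ** ((d + 1) // 2)

# Below threshold (p < p_th), larger d exponentially suppresses p_L
for d in (3, 5, 7):
    print(d, logical_error_rate(0.005, d=d))
```

Above threshold (p > p_th) the same formula predicts that increasing d makes things worse, which is why high physical two-qubit gate fidelity is a prerequisite for the distance-scaling demonstration.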
Utilizing the framework of $\mathbb{Z}_2$ lattice gauge theories in the context of Pauli stabilizer codes, we present methodologies for simulating fermions via qubit systems on a two-dimensional square lattice. We investigate the symplectic automorphisms of the Pauli module over the Laurent polynomial ring. This enables us to systematically increase the code distances of stabilizer codes while fixing the rate between encoded logical fermions and physical qubits. We identify a family of stabilizer codes suitable for fermion simulation, achieving code distances of d = 2, 3, 4, 5, 6, 7, allowing correction of any $\lfloor \frac{d-1}{2} \rfloor$-qubit error. In contrast to the traditional code concatenation approach, our method can increase the code distances without decreasing the (fermionic) code rate. In particular, we explicitly show all stabilizers and logical operators for codes with code distances of d = 3, 4, 5. We provide syndromes for all Pauli errors and invent a syndrome-matching algorithm to compute code distances numerically.
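The relation between the code distance and the number of correctable qubit errors used above is the standard one, sketched here as a one-line helper (the function name is illustrative):

```python
def max_correctable(d):
    """A distance-d code corrects any t = floor((d - 1) / 2) qubit errors."""
    return (d - 1) // 2

print({d: max_correctable(d) for d in range(2, 8)})
# {2: 0, 3: 1, 4: 1, 5: 2, 6: 2, 7: 3}
```

Note that even distances (d = 2, 4, 6) add error detection capability but do not increase t, which is why the odd distances in the family are the ones that improve correction.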
The Shor fault-tolerant error correction (FTEC) scheme uses transversal gates and ancilla qubits prepared in the cat state in syndrome extraction circuits to prevent propagation of errors caused by gate faults. For a stabilizer code of distance $d$ that can correct up to $t=\lfloor (d-1)/2\rfloor$ errors, the traditional Shor scheme handles ancilla preparation and measurement faults by performing syndrome measurements until the syndromes are repeated $t+1$ times in a row; in the worst-case scenario, $(t+1)^{2}$ rounds of measurements are required. In this work, we improve the Shor FTEC scheme using an adaptive syndrome measurement technique. The syndrome for error correction is determined based on information from the differences of syndromes obtained from consecutive rounds. Our protocols that satisfy the strong and the weak FTEC conditions require no more than $(t+3)^{2}/4-1$ rounds and $(t+3)^{2}/4-2$ rounds, respectively, and are applicable to any stabilizer code. Our simulations of FTEC protocols with the adaptive schemes on hexagonal color codes of small distances verify that our protocols preserve the code distance, can increase the pseudothreshold, and can decrease the average number of rounds compared to the traditional Shor scheme. We also find that for the code of distance $d$, our FTEC protocols with the adaptive schemes require no more than $d$ rounds on average.
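The worst-case round counts quoted in the abstract can be tabulated directly. This sketch only evaluates the stated bounds for a few distances; it does not implement the adaptive protocol itself.

```python
def shor_rounds_worst_case(d):
    """Traditional Shor scheme: worst case of (t + 1)^2 syndrome rounds."""
    t = (d - 1) // 2
    return (t + 1) ** 2

def adaptive_rounds_strong(d):
    """Adaptive scheme, strong FTEC condition: at most (t + 3)^2 / 4 - 1 rounds."""
    t = (d - 1) // 2
    return (t + 3) ** 2 / 4 - 1

def adaptive_rounds_weak(d):
    """Adaptive scheme, weak FTEC condition: at most (t + 3)^2 / 4 - 2 rounds."""
    t = (d - 1) // 2
    return (t + 3) ** 2 / 4 - 2

for d in (3, 5, 7, 9):
    print(d, shor_rounds_worst_case(d), adaptive_rounds_strong(d))
```

Already at d = 7 (t = 3) the worst case drops from 16 rounds to 8, and the gap widens quadratically-vs-quarter-quadratically with t.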

Abstract We study two-qubit circuits over the Clifford+CS gate set, which consists of the Clifford gates together with the controlled-phase gate CS = diag(1, 1, 1, i). The Clifford+CS gate set is universal for quantum computation and its elements can be implemented fault-tolerantly in most error-correcting schemes through magic state distillation. Since non-Clifford gates are typically more expensive to perform in a fault-tolerant manner, it is often desirable to construct circuits that use few CS gates. In the present paper, we introduce an efficient and optimal synthesis algorithm for two-qubit Clifford+CS operators. Our algorithm inputs a Clifford+CS operator U and outputs a Clifford+CS circuit for U which uses the least possible number of CS gates. Because the algorithm is deterministic, the circuit it associates to a Clifford+CS operator can be viewed as a normal form for that operator. We give an explicit description of these normal forms and use this description to derive a worst-case lower bound of $5{\log}_{2}(\frac{1}{\epsilon})+O(1)$ on the number of CS gates required to $\epsilon$-approximate elements of SU(4). Our work leverages a wide variety of mathematical tools that may find further applications in the study of fault-tolerant quantum circuits.
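The asymptotic lower bound above is easy to evaluate for concrete target precisions. A small sketch, noting that the additive O(1) constant is unspecified in the asymptotic statement (the parameter `c` below is a placeholder for it):

```python
import math

def cs_lower_bound(epsilon, c=0.0):
    """Worst-case lower bound 5 * log2(1 / epsilon) + O(1) on the number of
    CS gates needed to epsilon-approximate a generic element of SU(4).
    The additive constant c stands in for the unspecified O(1) term."""
    return 5 * math.log2(1 / epsilon) + c

print(cs_lower_bound(1e-3))  # roughly 50 CS gates, up to the additive constant
```

So halving the target error epsilon costs about 5 additional CS gates in the worst case, which quantifies why CS-count-optimal synthesis matters for fault-tolerant cost.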