

Title: Erasure conversion for fault-tolerant quantum computing in alkaline earth Rydberg atom arrays
Abstract

Executing quantum algorithms on error-corrected logical qubits is a critical step for scalable quantum computing, but the requisite numbers of qubits and physical error rates are demanding for current experimental hardware. Recently, the development of error correcting codes tailored to particular physical noise models has helped relax these requirements. In this work, we propose a qubit encoding and gate protocol for ¹⁷¹Yb neutral atom qubits that converts the dominant physical errors into erasures, that is, errors in known locations. The key idea is to encode qubits in a metastable electronic level, such that gate errors predominantly result in transitions to disjoint subspaces whose populations can be continuously monitored via fluorescence. We estimate that 98% of errors can be converted into erasures. We quantify the benefit of this approach via circuit-level simulations of the surface code, finding a threshold increase from 0.937% to 4.15%. We also observe a larger code distance near the threshold, leading to a faster decrease in the logical error rate for the same number of physical qubits, which is important for near-term implementations. Erasure conversion should benefit any error correcting code, and may also be applied to design new gates and encodings in other qubit platforms.
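For intuition on why erasure conversion helps, the sketch below evaluates a common below-threshold scaling ansatz for the surface code, p_L ≈ A·(p/p_th)^d_eff, in which heralded erasures raise both the threshold and the effective scaling exponent (approaching d rather than roughly (d+1)/2). The two thresholds are the ones quoted in the abstract; the prefactor, the linear interpolation of d_eff, and the physical error rate are illustrative assumptions, not the paper's circuit-level simulation.

```python
# Illustrative below-threshold scaling, NOT the paper's circuit-level Monte
# Carlo: p_L ~ A * (p / p_th)**d_eff, with d_eff interpolated between the
# Pauli limit (d+1)/2 and the erasure limit d. Thresholds are taken from the
# abstract; A and p are assumed values.

def logical_error_rate(p, d, p_th, erasure_fraction, A=0.1):
    """Toy estimate of the surface-code logical error rate per round."""
    d_eff = (1 - erasure_fraction) * (d + 1) / 2 + erasure_fraction * d
    return A * (p / p_th) ** d_eff

p = 1e-3  # assumed physical error rate per operation
for d in (3, 5, 7):
    pauli = logical_error_rate(p, d, p_th=0.00937, erasure_fraction=0.0)
    erasure = logical_error_rate(p, d, p_th=0.0415, erasure_fraction=0.98)
    print(f"d={d}: Pauli-only {pauli:.1e}  vs  98% erasure {erasure:.1e}")
```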

 
Award ID(s): 2120757
NSF-PAR ID: 10369428
Author(s) / Creator(s): ; ; ;
Publisher / Repository: Nature Publishing Group
Date Published:
Journal Name: Nature Communications
Volume: 13
Issue: 1
ISSN: 2041-1723
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Practical quantum computing will require error rates well below those achievable with physical qubits. Quantum error correction [1,2] offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, for which increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here we report the measurement of logical qubit performance scaling across several code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find that our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, in terms of both logical error probability over 25 cycles and logical error per cycle ((2.914 ± 0.016)% compared to (3.028 ± 0.023)%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per cycle floor set by a single high-energy event (1.6 × 10⁻⁷ excluding this event). We accurately model our experiment, extracting error budgets that highlight the biggest challenges for future systems. These results mark an experimental demonstration in which quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.
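    Quick arithmetic on the figures quoted above: the ratio of per-cycle logical errors follows directly from the reported values, while the 25-cycle failure probability assumes cycles fail independently, a simplification rather than the paper's own fitting procedure.

    ```python
    # Arithmetic on the reported per-cycle logical errors. The ratio is pure
    # arithmetic; the 25-cycle failure probability assumes independent cycle
    # failures, which is an illustrative simplification.

    per_cycle = {"distance-3": 0.03028, "distance-5": 0.02914}
    cycles = 25

    ratio = per_cycle["distance-3"] / per_cycle["distance-5"]
    print(f"error-suppression ratio d=3 -> d=5: {ratio:.3f} (~4% improvement)")

    for code, eps in per_cycle.items():
        p_fail = 1.0 - (1.0 - eps) ** cycles
        print(f"{code}: {eps:.3%}/cycle -> ~{p_fail:.0%} failure over {cycles} cycles")
    ```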

     
  2. Running quantum programs is fraught with challenges on today’s noisy intermediate-scale quantum (NISQ) devices. Many of these challenges originate from error characteristics stemming from rapid decoherence, noise during measurement, qubit connections, crosstalk, the qubits themselves, and transformations of qubit state via gates. Not only are qubits not “created equal”, but their noise level also changes over time. IBM is said to calibrate its quantum systems once per day and reports noise levels (errors) at the time of such calibration. This information is subsequently used to map circuits to higher-quality qubits and connections up to the next calibration point. This work provides evidence that there is room for improvement over this daily calibration cycle. It contributes a technique to measure noise levels (errors) related to qubits immediately before executing one or more sensitive circuits and shows that just-in-time noise measurements can benefit late physical qubit mappings. With this just-in-time recalibrated transpilation, the fidelity of results is improved over IBM’s default mappings, which use only the daily calibrations. The framework assesses two major sources of noise: readout errors (measurement errors) and two-qubit gate/connection errors. Experiments indicate that the accuracy of circuit results improves by 3-304% on average and up to 400% with on-the-fly circuit mappings based on error measurements just prior to application execution.
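    A minimal sketch of the just-in-time idea, assuming freshly measured readout and two-qubit errors are available as plain dictionaries: rank coupled qubit pairs by combined error and pick physical qubits greedily. The helper name and all numbers are hypothetical; this is not IBM's API or the paper's actual framework.

    ```python
    # Sketch: choose a physical-qubit layout from just-in-time error
    # measurements rather than day-old calibration data. Keys of `cx_err`
    # are coupled qubit pairs; values are made-up two-qubit gate errors.

    def pick_layout(readout_err, cx_err, n_logical):
        """Greedily choose physical qubits along low-error couplings."""
        # Score each coupled pair by its two-qubit error plus the readout
        # errors of both endpoints, then walk the pairs in order of quality.
        scored = sorted(
            cx_err.items(),
            key=lambda kv: kv[1] + readout_err[kv[0][0]] + readout_err[kv[0][1]],
        )
        layout = []
        for (a, b), _ in scored:
            for q in (a, b):
                if q not in layout:
                    layout.append(q)
            if len(layout) >= n_logical:
                return layout[:n_logical]
        return layout

    # Example with hypothetical just-in-time measurements:
    readout = {0: 0.021, 1: 0.048, 2: 0.013, 3: 0.030}
    cx = {(0, 1): 0.011, (1, 2): 0.009, (2, 3): 0.024}
    print(pick_layout(readout, cx, n_logical=3))
    ```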
  3. Abstract

    Leakage is a particularly damaging error that occurs when a qubit state falls out of its two-level computational subspace. Compared to independent depolarizing noise, leaked qubits may produce many more configurations of harmful correlated errors during error-correction. In this work, we investigate different local codes in the low-error regime of a leakage gate error model. When restricting to bare-ancilla extraction, we observe that subsystem codes are good candidates for handling leakage, as their locality can limit damaging correlated errors. As a case study, we compare subspace surface codes to the subsystem surface codes introduced by Bravyi et al. In contrast to depolarizing noise, subsystem surface codes outperform same-distance subspace surface codes below error rates as high as ⪅ 7.5 × 10⁻⁴ while offering better per-qubit distance protection. Furthermore, we show that at low to intermediate distances, Bacon–Shor codes offer better per-qubit error protection against leakage in an ion-trap motivated error model below error rates as high as ⪅ 1.2 × 10⁻³. For restricted leakage models, this advantage can be extended to higher distances by relaxing to unverified two-qubit cat state extraction in the surface code. These results highlight an intrinsic benefit of subsystem code locality to error-corrective performance.
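    The correlated-error argument above can be made concrete with toy bookkeeping (an illustration, not the error model used in this work): a Pauli fault acts once, while a leaked qubit keeps corrupting the two-qubit gates it participates in until it relaxes or is reset. The gate count per round is an assumed surface-code-like value.

    ```python
    # Toy bookkeeping: how many faults can be traced back to one error event.
    # A plain Pauli error contributes a single fault; a leakage event that
    # persists for several syndrome rounds can seed a fault on every
    # two-qubit gate the leaked qubit touches.

    def faults_seeded(leakage_lifetime_rounds, two_qubit_gates_per_round=4):
        """Upper bound on correlated faults traceable to one leakage event."""
        return leakage_lifetime_rounds * two_qubit_gates_per_round

    print("single Pauli error: 1 fault")
    for lifetime in (1, 3, 10):
        print(f"leakage surviving {lifetime} rounds: up to "
              f"{faults_seeded(lifetime)} correlated faults")
    ```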

     
  4. Abstract

    The leakage of quantum information out of the two computational states of a qubit into other energy states represents a major challenge for quantum error correction. During the operation of an error-corrected algorithm, leakage builds over time and spreads through multi-qubit interactions. This leads to correlated errors that degrade the exponential suppression of the logical error with scale, thus challenging the feasibility of quantum error correction as a path towards fault-tolerant quantum computation. Here, we demonstrate a distance-3 surface code and distance-21 bit-flip code on a quantum processor for which leakage is removed from all qubits in each cycle. This shortens the lifetime of leakage and curtails its ability to spread and induce correlated errors. We report a tenfold reduction in the steady-state leakage population of the data qubits encoding the logical state and an average leakage population of less than 1 × 10⁻³ throughout the entire device. Our leakage removal process efficiently returns the system to the computational basis. Adding it to a code circuit would prevent leakage from inducing correlated error across cycles. With this demonstration that leakage can be contained, we have resolved a key challenge for practical quantum error correction at scale.
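    A simple rate-equation picture (an illustration, not the paper's analysis) shows why per-cycle removal bounds the leakage population: if a fraction γ of qubits leak each cycle and a fraction r of leaked qubits is returned to the computational subspace, the population settles near γ/r. The rates below are assumed values.

    ```python
    # Iterate p(t+1) = (1 - r) * p(t) + gamma until it reaches its fixed
    # point gamma / r. The injection rate and removal efficiencies are
    # illustrative assumptions, not measured values from this work.

    def steady_state_leakage(gamma, removal_fraction, cycles=200):
        p = 0.0
        for _ in range(cycles):
            p = (1.0 - removal_fraction) * p + gamma
        return p

    gamma = 5e-4  # assumed per-cycle leakage injection rate
    print("slow natural decay only (5%/cycle): "
          f"{steady_state_leakage(gamma, 0.05):.2e}")
    print("per-cycle removal (90% efficient):  "
          f"{steady_state_leakage(gamma, 0.90):.2e}")
    ```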

     
  5.
    Quantum computers are growing in size, and design decisions are being made now that attempt to squeeze more computation out of these machines. In this spirit, we design a method to boost the computational power of near-term quantum computers by adapting protocols used in quantum error correction to implement "Approximate Quantum Error Correction (AQEC)." By approximating fully-fledged error correction mechanisms, we can increase the compute volume (qubits × gates, or "Simple Quantum Volume (SQV)") of near-term machines. The crux of our design is a fast hardware decoder that can approximately decode detected error syndromes. Specifically, we demonstrate a proof-of-concept that approximate error decoding can be accomplished online in near-term quantum systems by designing and implementing a novel algorithm in Single-Flux Quantum (SFQ) superconducting logic technology. This avoids a critical decoding backlog, hidden in all offline decoding schemes, that leads to idle time exponential in the number of T gates in a program. Our design utilizes one SFQ processing module per physical qubit. Employing state-of-the-art SFQ synthesis tools, we show that the circuit area, power, and latency are within the constraints of contemporary quantum system designs. Under pure dephasing error models, the proposed accelerator and AQEC solution are able to expand SQV by factors between 3,402 and 11,163 on expected near-term machines. The decoder achieves a 5% accuracy threshold and pseudo-thresholds of ∼5%, 4.75%, 4.5%, and 3.5% physical error rates for code distances 3, 5, 7, and 9. Decoding solutions are achieved in a maximum of ∼20 nanoseconds on the largest code distances studied. By avoiding the exponential idle time in offline decoders, we achieve a 10x reduction in required code distances to achieve the same logical performance as alternative designs.
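    The decoding backlog mentioned above can be made concrete with a back-of-envelope model: if decoding a round of syndrome data takes longer than generating it by a factor f > 1, the wait before the k-th sequential T gate grows roughly like f^k. The round time and slowdown factor below are illustrative assumptions, not figures from this work.

    ```python
    # Back-of-envelope backlog model: each T gate must wait for all pending
    # syndrome rounds to be decoded, so a decoder slower than syndrome
    # generation by a factor f compounds the wait roughly as f**k.

    def idle_time_before_t_gate(k, slowdown_factor, round_time_ns=1000.0):
        """Rough idle time (ns) before the k-th sequential T gate."""
        return round_time_ns * (slowdown_factor ** k - 1)

    for k in (5, 10, 20):
        print(f"T gate #{k}: offline decoder (f=1.5) idles "
              f"~{idle_time_before_t_gate(k, 1.5):.0f} ns; "
              f"online decoder (f<=1) idles ~0 ns")
    ```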