
Title: Purification-based quantum error mitigation of pair-correlated electron simulations

An important measure of the development of quantum computing platforms has been the simulation of increasingly complex physical systems. Until fault-tolerant quantum computing arrives, robust error-mitigation strategies are necessary to continue this growth. Here, we validate recently introduced error-mitigation strategies that exploit the expectation that the ideal output of a quantum algorithm would be a pure state. We consider the task of simulating electron systems in the seniority-zero subspace, where all electrons are paired with their opposite spin. This affords a computational stepping stone to a fully correlated model. We compare the performance of error-mitigation strategies based on doubling quantum resources in time or in space, on up to 20 qubits of a superconducting-qubit quantum processor. We observe a reduction of error by one to two orders of magnitude below less sophisticated techniques such as postselection. We study how the gain from error mitigation scales with system size and observe a polynomial suppression of error with increased resources. Extrapolation of our results indicates that substantial hardware improvements will be required for classically intractable variational chemistry simulations.
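The purification idea the abstract refers to can be illustrated classically. A minimal sketch, assuming a density-matrix picture: if the noisy state ρ is close to a pure target, the "virtual distillation" estimator ⟨O⟩ = Tr(ρ²O)/Tr(ρ²) suppresses incoherent error contributions relative to the raw estimate Tr(ρO). The function and toy noise model below are illustrative, not the paper's implementation.

```python
import numpy as np

def purified_expectation(rho, obs):
    """Second-degree purification estimate of <obs> under state rho:
    Tr(rho^2 obs) / Tr(rho^2)."""
    rho2 = rho @ rho
    return np.real(np.trace(rho2 @ obs) / np.trace(rho2))

# Toy example: target pure state |0>, depolarized with probability p.
p = 0.2
ket0 = np.array([[1.0], [0.0]])
target = ket0 @ ket0.T                      # |0><0|
rho = (1 - p) * target + p * np.eye(2) / 2  # depolarizing noise
Z = np.diag([1.0, -1.0])                    # observable

raw = np.real(np.trace(rho @ Z))            # 1 - p = 0.8
mitigated = purified_expectation(rho, Z)
# mitigated (~0.976) is closer than raw (0.8) to the ideal value 1.0
```

On hardware the squared state ρ² is never formed explicitly; it is accessed by doubling resources in space (two copies measured jointly) or in time (echo-type circuits), which is the trade-off the paper compares.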

Author(s) / Creator(s):
Publisher / Repository:
Nature Publishing Group
Date Published:
Journal Name:
Nature Physics
Medium: X Size: p. 1787-1792
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Quantum computing has the potential to revolutionize computing, but its significant sensitivity to noise requires sophisticated error correction and mitigation. Traditionally, noise on the quantum device is characterized directly through qubit and gate measurements, but this approach has drawbacks in that it does not adequately capture the effect of noise on realistic multi-qubit applications. In this paper, we simulate the relaxation of stationary quantum states on a quantum computer to obtain a unique spectroscopic fingerprint of the computer’s noise. In contrast to traditional approaches, we obtain the frequency profile of the noise as it is experienced by the simulated stationary quantum states. Data from multiple superconducting-qubit IBM processors show that noise generates a bath within the simulation that exhibits both colored noise and non-Markovian behavior. Our results provide a direction for noise mitigation but also suggest how to use noise for quantum simulations of open systems.

  2. Abstract

    Practical quantum computing will require error rates well below those achievable with physical qubits. Quantum error correction [1,2] offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, for which increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here we report the measurement of logical qubit performance scaling across several code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find that our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, in terms of both logical error probability over 25 cycles and logical error per cycle ((2.914 ± 0.016)% compared to (3.028 ± 0.023)%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per cycle floor set by a single high-energy event (1.6 × 10⁻⁷ excluding this event). We accurately model our experiment, extracting error budgets that highlight the biggest challenges for future systems. These results mark an experimental demonstration in which quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.
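The distance-3 versus distance-5 comparison above can be summarized by an error-suppression factor. A hedged sketch, assuming the standard surface-code scaling ansatz ε_d ∝ Λ^(−(d+1)/2) (an assumption of this illustration, not stated in the abstract); with that form, the ratio of the two quoted logical-error-per-cycle values gives Λ directly:

```python
def lambda_factor(eps_d3, eps_d5):
    """Error-suppression factor Lambda from logical error per cycle at
    distances 3 and 5. Under eps_d ~ Lambda**(-(d+1)/2), the exponents
    differ by (5+1)/2 - (3+1)/2 = 1, so eps_3 / eps_5 = Lambda."""
    return eps_d3 / eps_d5

# Logical error per cycle quoted in the abstract (central values only).
eps3, eps5 = 0.03028, 0.02914
lam = lambda_factor(eps3, eps5)  # ~1.04: barely above break-even
```

A value of Λ only slightly above 1 is consistent with the abstract's "modestly outperforms": suppression improves with code distance, but not yet by much.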

  3.
    This work provides quantitative tests of the extent of violation of two inequalities applicable to qubits coupled into Bell states, using IBM's publicly accessible quantum computers. Violations of the inequalities are well established. Our purpose is not to test the inequalities, but rather to determine how well quantum mechanical predictions can be reproduced on quantum computers, given their current fault rates. We present results for the spin projections of two entangled qubits, along three axes A, B, and C, with a fixed angle θ between A and B and a range of angles θ′ between B and C. For any classical object that can be characterized by three observables with two possible values, inequalities govern relationships among the probabilities of outcomes for the observables, taken pairwise. From set theory, these inequalities must be satisfied by all such classical objects; but quantum systems may violate the inequalities. We have detected clear-cut violations of one inequality in runs on IBM's publicly accessible quantum computers. The Clauser–Horne–Shimony–Holt (CHSH) inequality governs a linear combination S of expectation values of products of spin projections, taken pairwise. Finding S > 2 rules out local hidden-variable theories for entangled quantum systems. We obtained values of S greater than 2 in our runs prior to error mitigation. To reduce the quantitative errors, we used a modification of the error-mitigation procedure in the IBM documentation. We prepared a pair of qubits in the state |00〉, found the probabilities to observe the states |00〉, |01〉, |10〉, and |11〉 in multiple runs, and used that information to construct the first column of an error matrix M. We repeated this procedure for states prepared as |01〉, |10〉, and |11〉 to construct the full matrix M, whose inverse is the filtering matrix. After applying filtering matrices to our averaged outcomes, we have found good quantitative agreement between the quantum computer output and the quantum mechanical predictions for the extent of violation of both inequalities as functions of θ′.
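The filtering procedure this abstract describes is readout-error mitigation by confusion-matrix inversion, and it is simple to sketch classically. In the code below, each column of M holds the outcome probabilities observed when one basis state is prepared (the specific numbers in M are invented for illustration; a real M comes from the calibration runs the abstract describes), and applying M⁻¹ to a measured distribution recovers the underlying one:

```python
import numpy as np

# Columns: prepared |00>, |01>, |10>, |11>. Each column is the observed
# outcome distribution for that preparation, so columns sum to 1.
M = np.array([
    [0.95, 0.03, 0.04, 0.01],   # P(observe 00 | prepared column state)
    [0.02, 0.94, 0.01, 0.03],   # P(observe 01 | ...)
    [0.02, 0.01, 0.93, 0.02],   # P(observe 10 | ...)
    [0.01, 0.02, 0.02, 0.94],   # P(observe 11 | ...)
])

def filter_distribution(p_measured):
    """Apply the filtering matrix M^{-1} to a measured distribution.
    Solving M x = p_measured is equivalent to x = M^{-1} p_measured
    but numerically better behaved."""
    return np.linalg.solve(M, p_measured)

p_true = np.array([0.5, 0.0, 0.0, 0.5])   # ideal Bell-state statistics
p_meas = M @ p_true                        # what noisy readout reports
p_filtered = filter_distribution(p_meas)   # recovers p_true
```

With finite sampling, the filtered vector can acquire small negative entries; practical implementations typically clip or project back onto the probability simplex, a detail omitted here.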
  4. Abstract

    Quantum chemistry is a key application area for noisy intermediate-scale quantum (NISQ) devices, and therefore serves as an important benchmark for current and future quantum computer performance. Previous benchmarks in this field have focused on variational methods for computing ground and excited states of various molecules, including a benchmarking suite focused on the performance of computing ground states for alkali hydrides under an array of error mitigation methods. State-of-the-art methods to reach chemical accuracy in hybrid quantum-classical electronic structure calculations of alkali hydride molecules on NISQ devices from IBM are outlined here. It is demonstrated how to extend the reach of variational eigensolvers with symmetry-preserving Ansätze. Next, it is outlined how to use quantum imaginary time evolution and Lanczos as a complementary method to variational techniques, highlighting the advantages of each approach. Finally, a new error mitigation method is demonstrated which uses systematic error cancellation via hidden inverse gate constructions, improving the performance of typical variational algorithms. These results show that electronic structure calculations have advanced rapidly, to routine chemical accuracy for simple molecules, from their inception on quantum computers a few short years ago, and they point to further rapid progress to larger molecules as the power of NISQ devices grows.

  5. Abstract

    One of the most challenging obstacles to realizing exascale computing is minimizing the energy consumption of L2 cache, main memory, and interconnects to that memory. For promising cryogenic computing schemes utilizing Josephson junction superconducting logic, this obstacle is exacerbated by the cryogenic system requirements that expose the technology’s lack of high-density, high-speed and power-efficient memory. Here we demonstrate an array of cryogenic memory cells consisting of a non-volatile three-terminal magnetic tunnel junction element driven by the spin Hall effect, combined with a superconducting heater-cryotron bit-select element. The write energy of these memory elements is roughly 8 pJ with a bit-select element, designed to achieve a minimum overhead power consumption of about 30%. Individual magnetic memory cells measured at 4 K show reliable switching with write error rates below 10⁻⁶, and a 4 × 4 array can be fully addressed with bit-select error rates of 10⁻⁶. This demonstration is a first step towards a full cryogenic memory architecture targeting energy and performance specifications appropriate for applications in superconducting high-performance and quantum computing control systems, which require significant memory resources operating at 4 K.
