
Title: Quantum circuit cutting with maximum-likelihood tomography
Abstract: We introduce maximum-likelihood fragment tomography (MLFT) as an improved circuit cutting technique for running clustered quantum circuits on quantum devices with a limited number of qubits. In addition to minimizing the classical computing overhead of circuit cutting methods, MLFT finds the most likely probability distribution for the output of a quantum circuit, given the measurement data obtained from the circuit’s fragments. We demonstrate the benefits of MLFT for accurately estimating the output of a fragmented quantum circuit with numerical experiments on random unitary circuits. Finally, we show that circuit cutting can estimate the output of a clustered circuit with higher fidelity than full circuit execution, thereby motivating the use of circuit cutting as a standard tool for running clustered circuits on quantum hardware.
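As a rough illustration of the maximum-likelihood idea in the abstract: naively recombining raw fragment estimates can yield a quasi-probability vector with small negative entries, and one way to recover a valid, most-likely distribution is to project that vector onto the probability simplex. The NumPy sketch below shows only that projection step, using the standard sort-based Euclidean algorithm; it is not the authors' full MLFT procedure, which works directly with fragment tomography data, and the quasi-probability values are made up for illustration.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a real vector onto the probability simplex.

    Standard sort-based algorithm: find the largest support size k such
    that shifting the top-k entries keeps them positive, then clip the rest.
    """
    u = np.sort(v)[::-1]                 # entries in decreasing order
    css = np.cumsum(u) - 1.0             # cumulative sums, shifted by total mass 1
    ks = np.arange(1, len(v) + 1)
    k = ks[u - css / ks > 0][-1]         # largest feasible support size
    tau = css[k - 1] / k                 # common shift applied to kept entries
    return np.maximum(v - tau, 0.0)

# Hypothetical quasi-probability vector, e.g. from naively recombining
# fragment estimates: it sums to 1 but has a nonphysical negative entry.
quasi = np.array([0.55, 0.30, -0.05, 0.20])
p = project_to_simplex(quasi)
print(p, p.sum())  # a valid probability distribution summing to 1
```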
Award ID(s):
2037984
Publication Date:
NSF-PAR ID:
10276496
Journal Name:
npj Quantum Information
Volume:
7
Issue:
1
ISSN:
2056-6387
Sponsoring Org:
National Science Foundation
More Like this
  1. We describe how classical supercomputing can aid unreliable quantum processors of intermediate size to solve large problem instances reliably. We advocate using a hybrid quantum-classical architecture where larger quantum circuits are broken into smaller sub-circuits that are evaluated separately, either using a quantum processor or a quantum simulator running on a classical supercomputer. Circuit compilation techniques that determine which qubits are simulated classically will greatly impact the system performance as well as provide a tradeoff between circuit reliability and runtime. (The wire-cutting identity behind such circuit decompositions is illustrated in the first sketch after this list.)
  2. We present VOQC, the first fully verified optimizer for quantum circuits, written using the Coq proof assistant. Quantum circuits are expressed as programs in SQIR, a simple, low-level quantum intermediate representation that is deeply embedded in Coq. Optimizations and other transformations are expressed as Coq functions, which are proved correct with respect to a semantics of SQIR programs. SQIR uses a semantics of matrices of complex numbers, which is the standard for quantum computation, but treats matrices symbolically in order to reason about programs that use an arbitrary number of quantum bits. SQIR's careful design and our provided automation make it possible to write and verify a broad range of optimizations in VOQC, including full-circuit transformations from cutting-edge optimizers. (A loose Python analogue of such a verified optimization pass is the second sketch after this list.)
  3. Generative modeling is a flavor of machine learning with applications ranging from computer vision to chemical design. It is expected to be one of the techniques most suited to take advantage of the additional resources provided by near-term quantum computers. Here, we implement a data-driven quantum circuit training algorithm on the canonical Bars-and-Stripes dataset using a quantum-classical hybrid machine. The training proceeds by running parameterized circuits on a trapped-ion quantum computer and feeding the results to a classical optimizer. We apply two separate strategies, Particle Swarm and Bayesian optimization, to this task. We show that the convergence of the quantum circuit to the target distribution depends critically on both the quantum hardware and the classical optimization strategy. Our study represents the first successful training of a high-dimensional universal quantum circuit and highlights the promise and challenges associated with hybrid learning schemes. (A dependency-free sketch of such a hybrid training loop is the third sketch after this list.)
  4. Quantum computational supremacy arguments, which describe a way for a quantum computer to perform a task that cannot also be done by a classical computer, typically require some sort of computational assumption related to the limitations of classical computation. One common assumption is that the polynomial hierarchy (PH) does not collapse, a stronger version of the statement that P ≠ NP, which leads to the conclusion that any classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuits. However, the asymptotic nature of this conclusion prevents us from calculating exactly how many qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers. We refine these quantum computational supremacy arguments and perform such a calculation by imposing fine-grained versions of the non-collapse conjecture. Our first two conjectures, poly3-NSETH(a) and per-int-NSETH(b), take specific classical counting problems related to the number of zeros of a degree-3 polynomial in n variables over F_2 or the permanent of an n × n integer-valued matrix, and assert that any non-deterministic algorithm that solves them requires 2^(cn) time steps, where c ∈ {a, b}. A third conjecture, poly3-ave-SBSETH(a′), asserts a similar statement about average-case algorithms living in the exponential-time version of the complexity class SBP. We analyze evidence for these conjectures and argue that they are plausible when a = 1/2, b = 0.999, and a′ = 1/2. Imposing poly3-NSETH(1/2) and per-int-NSETH(0.999), and assuming that the runtime of a hypothetical quantum circuit simulation algorithm would scale linearly with the number of gates/constraints/optical elements, we conclude that Instantaneous Quantum Polynomial-Time (IQP) circuits with 208 qubits and 500 gates, Quantum Approximate Optimization Algorithm (QAOA) circuits with 420 qubits and 500 constraints, and boson sampling circuits (i.e., linear optical networks) with 98 photons and 500 optical elements are large enough for the task of producing samples from their output distributions up to constant multiplicative error to be intractable on current technology. Imposing poly3-ave-SBSETH(1/2), we additionally rule out simulations with constant additive error for IQP and QAOA circuits of the same size. Without the assumption of linearly increasing simulation time, we can make analogous statements for circuits with slightly fewer qubits but requiring 10^4 to 10^7 gates. (A brute-force counter for the degree-3 polynomial problem is the fourth sketch after this list.)
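First sketch (entry 1, and circuit cutting generally). Cutting a single-qubit wire rests on the exact identity ρ = (1/2) Σ_O Tr[O ρ] O for O ∈ {I, X, Y, Z}: upstream measurements and downstream runs then recombine classically. The NumPy sketch below verifies this as an exact density-matrix calculation (no sampling); the prepared state, the downstream unitary V, and the observable are illustrative choices, not taken from either paper.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# "Upstream" fragment: prepare a state on the wire that will be cut.
psi = H @ np.array([1, 0], dtype=complex)      # |+> state
rho = np.outer(psi, psi.conj())

# "Downstream" fragment: a unitary V followed by a Z measurement.
theta = 0.7
V = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

# Full-circuit expectation value, computed directly for reference.
direct = np.trace(Z @ V @ rho @ V.conj().T).real

# Cut reconstruction: expand rho in the Pauli basis. Each term pairs an
# upstream measurement Tr[O rho] with a downstream run of V on the basis
# operator O; the halves never need to run on the same device.
recombined = 0.5 * sum(
    np.trace(O @ rho).real * np.trace(Z @ V @ O @ V.conj().T).real
    for O in (I, X, Y, Z)
)

print(direct, recombined)  # the two values agree (exact identity)
```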
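Second sketch (entry 2). VOQC's optimizer is written and proved correct in Coq, which cannot be reproduced in a few lines here. As a loose, unverified Python analogue, the sketch below implements one common optimization of the kind VOQC covers, cancellation of adjacent self-inverse gates, and then tests (rather than proves) that the pass preserved the circuit's unitary semantics. The gate set, circuit representation, and example circuit are all hypothetical.

```python
import numpy as np

# Self-inverse single-qubit gates and their matrices.
GATES = {
    "H": np.array([[1, 1], [1, -1]]) / np.sqrt(2),
    "X": np.array([[0, 1], [1, 0]]),
    "Z": np.array([[1, 0], [0, -1]]),
}

def cancel_pairs(circuit):
    """Remove adjacent identical self-inverse gates acting on the same qubit."""
    out = []
    for gate in circuit:
        if out and out[-1] == gate and gate[0] in GATES:
            out.pop()           # gate * gate = identity, so drop both
        else:
            out.append(gate)
    return out

def unitary(circuit):
    """Semantics of a single-qubit circuit: the product of its gate matrices."""
    u = np.eye(2)
    for name, _qubit in circuit:
        u = GATES[name] @ u
    return u

circ = [("H", 0), ("X", 0), ("X", 0), ("H", 0), ("Z", 0)]
opt = cancel_pairs(circ)
print(opt)                                       # [('Z', 0)]
assert np.allclose(unitary(circ), unitary(opt))  # semantics preserved
```

VOQC establishes this semantics-preservation property once and for all, for every input circuit, via a Coq proof; the assertion above only checks a single instance.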
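Third sketch (entry 3). The hybrid training loop can be outlined end to end without quantum hardware: a small parameterized circuit is simulated classically, its output distribution is compared to the 2x2 Bars-and-Stripes target, and a classical optimizer adjusts the parameters. The paper used a trapped-ion device with Particle Swarm and Bayesian optimization; to stay dependency-free, this sketch substitutes a NumPy state-vector simulator and plain random hill-climbing, and the ansatz (Ry layers around a CZ ring) is an assumption, not the paper's circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4  # qubits, one per pixel of a 2x2 image

# Target: uniform distribution over the 2x2 Bars-and-Stripes patterns.
BAS = [0b0000, 0b1111, 0b0011, 0b1100, 0b0101, 0b1010]
target = np.zeros(2**N)
target[BAS] = 1 / len(BAS)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def ry_layer(thetas):
    u = np.array([[1.0]])
    for t in thetas:                    # qubit 0 is the most significant bit
        u = np.kron(u, ry(t))
    return u

# Fixed entangling layer: CZ gates around a ring of the 4 qubits,
# represented as a +/-1 diagonal over the 16 basis states.
cz_ring = np.ones(2**N)
for a, b in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    for s in range(2**N):
        if (s >> (N - 1 - a)) & 1 and (s >> (N - 1 - b)) & 1:
            cz_ring[s] *= -1

def born_probs(params):
    """Output distribution of the circuit: Ry layer, CZ ring, Ry layer."""
    state = np.zeros(2**N)
    state[0] = 1.0
    state = ry_layer(params[:N]) @ state
    state = cz_ring * state
    state = ry_layer(params[N:]) @ state
    return state**2  # amplitudes are real here, so |amp|^2 = amp^2

def loss(params):
    """KL divergence from the target to the model distribution."""
    p = born_probs(params) + 1e-12
    return np.sum(target[BAS] * np.log(target[BAS] / p[BAS]))

# Classical optimizer loop: plain random hill-climbing stands in for the
# Particle Swarm / Bayesian optimizers used in the paper.
params = rng.uniform(0, 2 * np.pi, 2 * N)
best = loss(params)
for _ in range(2000):
    trial = params + rng.normal(0, 0.1, 2 * N)
    trial_loss = loss(trial)
    if trial_loss < best:
        params, best = trial, trial_loss
print(f"final KL divergence: {best:.3f}")
```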
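Fourth sketch (entry 4). The conjecture poly3-NSETH(a) concerns counting the zeros of a degree-3 polynomial in n variables over F_2. The sketch below is just the naive 2^n enumeration that sets the baseline the conjecture is measured against (it asserts, roughly, that no non-deterministic algorithm does better than 2^(an) with a = 1/2 conjectured); the random polynomial is illustrative.

```python
from itertools import product
from random import Random

def count_zeros(n, monomials):
    """Count assignments x in {0,1}^n where the polynomial vanishes mod 2.

    `monomials` is a list of tuples of variable indices; each tuple of
    length <= 3 stands for the product of those variables (degree <= 3).
    """
    zeros = 0
    for x in product((0, 1), repeat=n):        # all 2^n assignments
        value = sum(all(x[i] for i in m) for m in monomials) % 2
        zeros += (value == 0)
    return zeros

# A small random degree-3 polynomial over F_2, just for illustration.
rng = Random(0)
n = 12
monomials = [tuple(sorted(rng.sample(range(n), 3))) for _ in range(20)]
print(count_zeros(n, monomials))  # enumerates all 2^12 = 4096 assignments
```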