 Award ID(s):
 2016245
 NSFPAR ID:
 10424828
 Date Published:
 Journal Name:
 ACM Transactions on Quantum Computing
 Volume:
 3
 Issue:
 2
 ISSN:
2643-6809
 Page Range / eLocation ID:
 1 to 28
 Format(s):
 Medium: X
 Sponsoring Org:
 National Science Foundation
More Like this

Quantum computational supremacy arguments, which describe a way for a quantum computer to perform a task that cannot also be done by a classical computer, typically require some sort of computational assumption related to the limitations of classical computation. One common assumption is that the polynomial hierarchy (PH) does not collapse, a stronger version of the statement that P ≠ NP, which leads to the conclusion that any classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuits. However, the asymptotic nature of this conclusion prevents us from calculating exactly how many qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers. We refine these quantum computational supremacy arguments and perform such a calculation by imposing fine-grained versions of the non-collapse conjecture. Our first two conjectures, poly3-NSETH(a) and per-int-NSETH(b), take specific classical counting problems related to the number of zeros of a degree-3 polynomial in n variables over F₂, or the permanent of an n × n integer-valued matrix, and assert that any non-deterministic algorithm that solves them requires 2^(cn) time steps, where c ∈ {a, b}. A third conjecture, poly3-ave-SBSETH(a′), asserts a similar statement about average-case algorithms living in the exponential-time version of the complexity class SBP.
We analyze evidence for these conjectures and argue that they are plausible when a = 1/2, b = 0.999, and a′ = 1/2. Imposing poly3-NSETH(1/2) and per-int-NSETH(0.999), and assuming that the runtime of a hypothetical quantum circuit simulation algorithm would scale linearly with the number of gates/constraints/optical elements, we conclude that Instantaneous Quantum Polynomial-time (IQP) circuits with 208 qubits and 500 gates, Quantum Approximate Optimization Algorithm (QAOA) circuits with 420 qubits and 500 constraints, and boson sampling circuits (i.e., linear optical networks) with 98 photons and 500 optical elements are large enough for the task of producing samples from their output distributions up to constant multiplicative error to be intractable on current technology. Imposing poly3-ave-SBSETH(1/2), we additionally rule out simulations with constant additive error for IQP and QAOA circuits of the same size. Without the assumption of linearly increasing simulation time, we can make analogous statements for circuits with slightly fewer qubits but requiring 10^4 to 10^7 gates.
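The counting problem behind poly3-NSETH(a) can be made concrete with a small sketch. The brute-force enumerator below (an illustration, not from the paper) counts the zeros of a degree-3 polynomial over F₂ in 2^n evaluations; the conjecture asserts that even non-deterministic algorithms cannot do fundamentally better than 2^(n/2) for this problem. The example polynomial is our own choice.

```python
from itertools import product

def count_zeros(poly_terms, n):
    """Count assignments x in {0,1}^n with f(x) = 0 over F_2.

    poly_terms: iterable of tuples of variable indices; each tuple is one
    monomial of degree <= 3, e.g. (0, 1, 2) means x0*x1*x2.
    Brute force costs 2^n evaluations, the baseline the conjecture sharpens.
    """
    zeros = 0
    for x in product((0, 1), repeat=n):
        val = 0
        for term in poly_terms:
            bit = 1
            for i in term:
                bit &= x[i]        # multiplication over F_2 is AND
            val ^= bit             # addition over F_2 is XOR
        zeros += (val == 0)
    return zeros

# f(x) = x0*x1*x2 + x0 + x1 (mod 2) on n = 3 variables
print(count_zeros([(0, 1, 2), (0,), (1,)], 3))  # → 3
```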

Quantum computers and simulators may offer significant advantages over their classical counterparts, providing insights into quantum many-body systems and possibly improving performance for solving exponentially hard problems, such as optimization and satisfiability. Here, we report the implementation of a low-depth Quantum Approximate Optimization Algorithm (QAOA) using an analog quantum simulator. We estimate the ground-state energy of the transverse-field Ising model with long-range interactions with tunable range, and we optimize the corresponding combinatorial classical problem by sampling the QAOA output with high-fidelity, single-shot, individual-qubit measurements. We execute the algorithm with both an exhaustive search and closed-loop optimization of the variational parameters, approximating the ground-state energy with up to 40 trapped-ion qubits. We benchmark the experiment with bootstrapping heuristic methods scaling polynomially with the system size. We observe, in agreement with numerics, that the QAOA performance does not degrade significantly as we scale up the system size and that the runtime is approximately independent of the number of qubits. We finally give a comprehensive analysis of the errors occurring in our system, a crucial step on the path toward applying the QAOA to more general problem instances.
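The QAOA loop described above (phase separator, mixer, measure, optimize the two variational angles) can be sketched classically for a tiny instance. The depth-1 statevector simulation below uses a 3-qubit antiferromagnetic Ising ring and a coarse grid search in place of the experiment's closed-loop optimizer; the instance and angles grid are our own illustrative choices, not the paper's 40-qubit system.

```python
import numpy as np
from itertools import product

# Cost Hamiltonian C = sum Z_i Z_j over the edges of a 3-qubit ring.
n = 3
edges = [(0, 1), (1, 2), (2, 0)]

# C is diagonal: precompute its value on every computational basis state.
basis = np.array(list(product((0, 1), repeat=n)))
spins = 1 - 2 * basis                      # bit 0 -> spin +1, bit 1 -> -1
cost = sum(spins[:, i] * spins[:, j] for i, j in edges).astype(float)

def qaoa_energy(gamma, beta):
    """<C> after one QAOA layer: e^{-i beta sum X} e^{-i gamma C} |+...+>."""
    state = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)   # |+...+>
    state *= np.exp(-1j * gamma * cost)                     # phase separator
    # Mixer exp(-i beta X) = cos(beta) I - i sin(beta) X, qubit by qubit.
    c, s = np.cos(beta), -1j * np.sin(beta)
    for q in range(n):
        state = state.reshape(2 ** q, 2, -1)
        state = np.stack([c * state[:, 0] + s * state[:, 1],
                          s * state[:, 0] + c * state[:, 1]], axis=1)
        state = state.reshape(-1)
    return float(np.real(np.sum(np.abs(state) ** 2 * cost)))

# Exhaustive grid search over the two variational angles.
grid = np.linspace(0, np.pi, 40)
best = min(qaoa_energy(g, b) for g in grid for b in grid)
print(round(best, 3))
```

At gamma = 0 the state stays uniform and the energy is 0 (the mean cost), so any negative `best` reflects a genuine variational improvement toward the frustrated ground-state energy of -1.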

Abstract Combinatorial optimization problems on graphs have broad applications in science and engineering. The quantum approximate optimization algorithm (QAOA) is a method to solve these problems on a quantum computer by applying multiple rounds of variational circuits. However, there exist several challenges limiting the application of QAOA to real-world problems. In this paper, we demonstrate on a trapped-ion quantum computer that QAOA results improve with the number of rounds for multiple problems on several arbitrary graphs. We also demonstrate an advanced mixing Hamiltonian that allows sampling of all optimal solutions with predetermined weights. Our results are a step toward applying quantum algorithms to real-world problems.

Abstract In the near term, hybrid quantum-classical algorithms hold great potential for outperforming classical approaches. Understanding how these two computing paradigms work in tandem is critical for identifying areas where such hybrid algorithms could provide a quantum advantage. In this work, we study a QAOA-based quantum optimization approach by implementing the Variational Quantum Factoring (VQF) algorithm. We execute experimental demonstrations using a superconducting quantum processor and investigate the trade-off between quantum resources (number of qubits and circuit depth) and the probability that a given biprime is successfully factored. In our experiments, the integers 1099551473989, 3127, and 6557 are factored with 3, 4, and 5 qubits, respectively, using a QAOA ansatz with up to 8 layers, and we are able to identify the optimal number of circuit layers for a given instance to maximize success probability. Furthermore, we demonstrate the impact of different noise sources on the performance of QAOA and reveal the coherent error caused by the residual ZZ coupling between qubits as a dominant source of error in a near-term superconducting quantum processor.
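VQF reduces factoring to an optimization problem: encode the unknown odd factors p and q in binary and minimize a cost that vanishes exactly when p·q = N. The classical brute-force sketch below illustrates that encoding on one of the biprimes above; the bit widths and the squared-residual cost are our simplification (the actual algorithm preprocesses the clauses and minimizes the cost with QAOA on far fewer qubits).

```python
from itertools import product

def vqf_bruteforce(N, p_bits, q_bits):
    """Exhaustively minimize the VQF-style cost (N - p*q)^2 over candidate
    odd factors p = 1 + 2*sum(2^i p_i) and q = 1 + 2*sum(2^i q_i).
    Returns (energy, p, q); energy 0 means a valid factorization was found.
    Classical illustration only; cost is 2^(p_bits + q_bits) evaluations.
    """
    best = None
    for pb in product((0, 1), repeat=p_bits):
        for qb in product((0, 1), repeat=q_bits):
            p = 1 + 2 * sum(b << i for i, b in enumerate(pb))
            q = 1 + 2 * sum(b << i for i, b in enumerate(qb))
            energy = (N - p * q) ** 2
            if best is None or energy < best[0]:
                best = (energy, p, q)
    return best

# 3127 is one of the biprimes factored in the experiment.
energy, p, q = vqf_bruteforce(3127, 5, 5)
print(energy, p, q)
```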
Abstract Many quantum algorithms have been developed to evaluate eigenvalues of Hermitian matrices. However, few practical approaches exist for the eigenanalysis of non-Hermitian ones, such as those arising from modern power systems. The main difficulty lies in the fact that, as the eigenvector matrix of a general matrix can be non-unitary, solving a general eigenvalue problem is inherently incompatible with existing unitary-gate-based quantum methods. To fill this gap, this paper introduces a Variational Quantum Universal Eigensolver (VQUE), which is deployable on noisy intermediate-scale quantum computers. Our new contributions include: (1) the first universal variational quantum algorithm capable of evaluating the eigenvalues of non-Hermitian matrices: inspired by Schur's triangularization theory, VQUE unitarizes the eigenvalue problem into a procedure of searching for unitary transformation matrices via quantum devices; (2) a Quantum Process Snapshot technique devised to make VQUE maintain the potential quantum advantage inherited from the original variational quantum eigensolver: with O(log₂N) additional quantum gates, this method efficiently identifies whether a unitary operator is triangular with respect to a given basis; (3) successful deployment and validation of VQUE on a real noisy quantum computer, demonstrating the algorithm's feasibility. We also undertake a comprehensive parametric study to validate VQUE's scalability, generality, and performance in realistic applications.
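The Schur-triangularization idea underlying VQUE can be checked classically on a small non-Hermitian matrix: once a unitary U is found that makes U†AU upper triangular, the eigenvalues of A sit on the diagonal. In the sketch below, `scipy.linalg.schur` stands in for VQUE's variational search over unitaries, and the 2×2 test matrix is our own example.

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[0., 1.],
              [-2., 3.]])               # non-Hermitian test matrix

# Schur decomposition A = U T U^dagger with T upper triangular; this is
# the target condition VQUE's variational search certifies on hardware.
T, U = schur(A, output="complex")

assert np.allclose(U.conj().T @ U, np.eye(2))   # U is unitary
assert np.allclose(np.tril(T, -1), 0)           # T is upper triangular

eigs = np.sort_complex(np.diag(T))
print(eigs)                                     # eigenvalues of A: 1 and 2
```

The eigenvalues 1 and 2 agree with the characteristic polynomial λ² − 3λ + 2 of A, which is how the triangular diagonal can be verified by hand.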