In the near term, hybrid quantum–classical algorithms hold great potential for outperforming classical approaches. Understanding how these two computing paradigms work in tandem is critical for identifying areas where such hybrid algorithms could provide a quantum advantage. In this work, we study a QAOA-based quantum optimization approach by implementing the Variational Quantum Factoring (VQF) algorithm. We execute experimental demonstrations on a superconducting quantum processor and investigate the trade-off between quantum resources (number of qubits and circuit depth) and the probability that a given biprime is successfully factored. In our experiments, the integers 1099551473989, 3127, and 6557 are factored with 3, 4, and 5 qubits, respectively, using a QAOA ansatz with up to 8 layers, and we are able to identify the optimal number of circuit layers for a given instance to maximize success probability. Furthermore, we demonstrate the impact of different noise sources on the performance of QAOA and reveal the coherent error caused by the residual
Realizing the potential of near-term quantum computers to solve industry-relevant constrained-optimization problems is a promising path to quantum advantage. In this work, we consider the extractive summarization constrained-optimization problem and demonstrate the largest-to-date execution of a quantum optimization algorithm that natively preserves constraints on quantum hardware. We report results with the Quantum Alternating Operator Ansatz algorithm with a Hamming-weight-preserving XY mixer (XY-QAOA) on a trapped-ion quantum computer. We successfully execute XY-QAOA circuits that restrict the quantum evolution to the in-constraint subspace, using up to 20 qubits and a two-qubit gate depth of up to 159. We demonstrate the necessity of directly encoding the constraints into the quantum circuit by showing the trade-off between the in-constraint probability and the quality of the solution that is implicit if unconstrained quantum optimization methods are used. We show that this trade-off makes choosing good parameters difficult in general. We compare XY-QAOA to the Layer Variational Quantum Eigensolver algorithm, which has a highly expressive constant-depth circuit, and to the Quantum Approximate Optimization Algorithm. We discuss the respective trade-offs of the algorithms and the implications for their execution on near-term quantum hardware.
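The defining property of the XY mixer, that it preserves Hamming weight and hence never leaves the in-constraint subspace, can be checked directly on a single pair of qubits. The sketch below uses a plain NumPy statevector (not the trapped-ion stack used in the paper):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# XY mixer on one pair of qubits: H = (X⊗X + Y⊗Y)/2
H = (np.kron(X, X) + np.kron(Y, Y)) / 2

def expmi(H, t):
    # e^{-i t H} for Hermitian H, via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

U = expmi(H, 0.7)                  # arbitrary mixing angle

psi = np.zeros(4, dtype=complex)
psi[1] = 1.0                       # |01>: Hamming weight 1
out = U @ psi

# amplitude moves only between |01> and |10>; the weight-0 and
# weight-2 states |00>, |11> are never populated
print(np.round(np.abs(out) ** 2, 3))
```

The transverse-field mixer of standard QAOA (a product of single-qubit X rotations) has no such invariant subspace, which is why unconstrained methods must instead penalize constraint violations in the cost Hamiltonian.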
Award ID(s): 2120757
NSF-PAR ID: 10373700
Publisher / Repository: Nature Publishing Group
Date Published:
Journal Name: Scientific Reports
Volume: 12
Issue: 1
ISSN: 2045-2322
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this

ZZ coupling between qubits as a dominant source of error in a near-term superconducting quantum processor.
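As a toy illustration of the QAOA structure behind these factoring experiments (alternating diagonal cost layers and transverse-field mixer layers), the NumPy sketch below prepares the ansatz state for a small made-up 3-qubit cost Hamiltonian, not the actual VQF clauses, and grid-searches the p = 1 angles for the best ground-state (success) probability:

```python
import numpy as np

# Toy 3-qubit instance (made-up cost clauses, NOT the VQF clauses for
# the biprimes in the paper): C = Z0*Z1 + Z1*Z2, with qubit 0 as the
# most significant bit of the statevector index.
n = 3
basis = [tuple((i >> (n - 1 - q)) & 1 for q in range(n)) for i in range(2**n)]
z = lambda bit: 1 - 2 * bit                      # |0> -> +1, |1> -> -1
cost = np.array([z(b[0]) * z(b[1]) + z(b[1]) * z(b[2]) for b in basis], float)
ground = cost == cost.min()                      # success = sampling these

def rx(beta):
    # single-qubit mixer rotation e^{-i beta X}
    return np.array([[np.cos(beta), -1j * np.sin(beta)],
                     [-1j * np.sin(beta), np.cos(beta)]])

def success(gammas, betas):
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)        # |+>^n
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * cost) * psi                 # cost layer
        psi = np.kron(np.kron(rx(b), rx(b)), rx(b)) @ psi  # mixer layer
    return float(np.sum(np.abs(psi[ground]) ** 2))

baseline = success([0.0], [0.0])     # 0.25: the uniform superposition
grid = np.linspace(0.0, np.pi, 25)
best = max(success([g], [b]) for g in grid for b in grid)
print(baseline, best)
```

With more layers the same scan extends to `success([g1, g2], [b1, b2])`; the paper's observation is that beyond a hardware-dependent depth, added layers hurt rather than help because of accumulated noise.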
The quantum approximate optimization algorithm (QAOA) is an approach for near-term quantum computers to potentially demonstrate computational advantage in solving combinatorial optimization problems. However, the viability of the QAOA depends on how its performance and resource requirements scale with problem size and complexity for realistic hardware implementations. Here, we quantify scaling of the expected resource requirements by synthesizing optimized circuits for hardware architectures with varying levels of connectivity. Assuming noisy gate operations, we estimate the number of measurements needed to sample the output of the idealized QAOA circuit with high probability. We show that the number of measurements, and hence the total time to solution, grows exponentially in problem size and problem graph degree, as well as in the depth of the QAOA ansatz, gate infidelities, and inverse hardware graph degree. These problems may be alleviated by increasing hardware connectivity or by recently proposed modifications to the QAOA that achieve higher performance with fewer circuit layers.
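The measurement-count estimate can be illustrated with a standard repeated-trials argument: if a single shot of the noisy circuit returns a usable sample with probability p, then M ≥ ln δ / ln(1 − p) shots succeed at least once with probability 1 − δ. The numbers below (ideal success probability, gate fidelity, gate counts) are made-up placeholders, not values from the paper:

```python
import math

def shots_needed(p_success, delta=0.01):
    """Shots M so that at least one usable sample appears with
    probability >= 1 - delta, i.e. (1 - p)^M <= delta."""
    return math.ceil(math.log(delta) / math.log(1.0 - p_success))

# Placeholder figures: ideal success probability 0.1, two-qubit gate
# fidelity 0.995, and a gate count that grows with problem size n.
for n, gates in [(8, 60), (16, 240), (32, 960)]:
    p_noisy = 0.1 * 0.995 ** gates      # fidelity decays exponentially
    print(n, gates, shots_needed(p_noisy))
```

Because the effective success probability decays exponentially in the gate count, the shot budget (and hence time to solution) grows exponentially, matching the qualitative scaling reported above.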

Quantum computational supremacy arguments, which describe a way for a quantum computer to perform a task that cannot also be done by a classical computer, typically require some sort of computational assumption related to the limitations of classical computation. One common assumption is that the polynomial hierarchy (PH) does not collapse, a stronger version of the statement that P ≠ NP, which leads to the conclusion that any classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuits. However, the asymptotic nature of this conclusion prevents us from calculating exactly how many qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers. We refine these quantum computational supremacy arguments and perform such a calculation by imposing fine-grained versions of the non-collapse conjecture. Our first two conjectures, poly3-NSETH(a) and per-int-NSETH(b), take specific classical counting problems related to the number of zeros of a degree-3 polynomial in n variables over F_2 or the permanent of an n × n integer-valued matrix, and assert that any non-deterministic algorithm that solves them requires 2^{cn} time steps, where c ∈ {a, b}. A third conjecture, poly3-ave-SBSETH(a′), asserts a similar statement about average-case algorithms living in the exponential-time version of the complexity class SBP.
We analyze evidence for these conjectures and argue that they are plausible when a = 1/2, b = 0.999, and a′ = 1/2. Imposing poly3-NSETH(1/2) and per-int-NSETH(0.999), and assuming that the runtime of a hypothetical quantum circuit simulation algorithm would scale linearly with the number of gates/constraints/optical elements, we conclude that Instantaneous Quantum Polynomial-Time (IQP) circuits with 208 qubits and 500 gates, Quantum Approximate Optimization Algorithm (QAOA) circuits with 420 qubits and 500 constraints, and boson sampling circuits (i.e. linear optical networks) with 98 photons and 500 optical elements are large enough for the task of producing samples from their output distributions up to constant multiplicative error to be intractable on current technology. Imposing poly3-ave-SBSETH(1/2), we additionally rule out simulations with constant additive error for IQP and QAOA circuits of the same size. Without the assumption of linearly increasing simulation time, we can make analogous statements for circuits with slightly fewer qubits but requiring 10^4 to 10^7 gates.
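To get a feel for the scale of these fine-grained bounds, the back-of-envelope check below compares 2^{n/2} time steps (the poly3-NSETH(1/2) lower bound, under the simplifying assumption that the instance size n tracks the qubit count) against an assumed classical compute budget; the budget figure is our assumption, not the paper's:

```python
# Back-of-envelope intractability check. The compute budget is an
# assumed figure (an exascale machine, ~1e18 ops/s, running ten
# years), not a number from the paper.
ops_budget = 1e18 * 3600 * 24 * 365 * 10        # ~3.2e26 operations

# poly3-NSETH(1/2): counting zeros of a degree-3 polynomial in n
# variables over F_2 needs at least 2^(n/2) time steps.
for n in (160, 184, 208):
    steps = 2.0 ** (n / 2)
    verdict = "intractable" if steps > ops_budget else "feasible"
    print(n, f"{steps:.1e}", verdict)
```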

The quantum approximate optimization algorithm (QAOA) is a near-term hybrid algorithm intended to solve combinatorial optimization problems, such as MaxCut. QAOA can be made to mimic an adiabatic schedule, and in the p → ∞ limit the final state is an exact maximal eigenstate in accordance with the adiabatic theorem. In this work, the connection between QAOA and adiabaticity is made explicit by inspecting the regime of p large but finite. By connecting QAOA to counterdiabatic (CD) evolution, we construct CD-QAOA angles which mimic a counterdiabatic schedule by matching Trotter "error" terms to approximate adiabatic gauge potentials which suppress diabatic excitations arising from finite ramp speed. In our construction, these "error" terms are helpful, not detrimental, to QAOA. Using this matching to link QAOA with quantum adiabatic algorithms (QAA), we show that the approximation ratio converges to one at least as 1 − C(p), with C(p) ∼ 1/p^μ. We show that transfer of parameters between graphs and interpolating angles for p + 1 given p are both natural byproducts of CD-QAOA matching. Optimization of CD-QAOA angles is equivalent to optimizing a continuous adiabatic schedule. Finally, we show that, using a property of variational adiabatic gauge potentials, QAOA is at least counterdiabatic, not just adiabatic, and has better performance than finite-time adiabatic evolution. We demonstrate the method on three examples: a two-level system, an Ising chain, and the MaxCut problem.
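The large-but-finite-p regime can be illustrated by Trotterizing a linear annealing schedule into QAOA-like layers for the simplest of the three examples, a two-level system. This sketch checks only that the Trotterized schedule tracks the adiabatic ground state; it does not implement the CD angle construction itself:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(P, angle):
    # e^{-i angle P} for a Pauli P (uses P^2 = I)
    return np.cos(angle) * np.eye(2) - 1j * np.sin(angle) * P

def trotterized_anneal(p, T=20.0):
    """QAOA-like Trotterization of H(s) = (1-s)X + sZ over total time T."""
    dt = T / p
    psi = np.array([1, -1], dtype=complex) / np.sqrt(2)  # ground state of X
    for k in range(p):
        s = (k + 0.5) / p                                # linear ramp
        psi = rot(Z, dt * s) @ rot(X, dt * (1 - s)) @ psi
    return psi

psi = trotterized_anneal(2000)
fidelity = abs(psi[1]) ** 2      # overlap with |1>, the ground state of Z
print(round(fidelity, 3))
```

For large p and slow ramps the layer angles γ_k = s_k·dt and β_k = (1 − s_k)·dt reproduce adiabatic evolution; the CD-QAOA construction described above goes further, choosing angles so that the Trotter "error" cancels diabatic excitations rather than merely vanishing.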

Despite rapid advances in quantum computing technologies, the qubit connectivity limitation remains a critical challenge. Both near-term NISQ quantum computers and relatively long-term scalable quantum architectures do not offer full connectivity. As a result, quantum circuits may not be directly executable on quantum hardware, and a quantum compiler needs to perform qubit routing to make the circuit compatible with the device layout. During the qubit routing step, the compiler inserts SWAP gates and performs circuit transformations. Given the connectivity topology of the target hardware, there are typically multiple qubit routing candidates. State-of-the-art compilers use a cost function to evaluate the number of SWAP gates for different routes and then select the one with the minimum number of SWAP gates. After qubit routing, the quantum compiler performs gate optimizations on the circuit with the newly inserted SWAP gates. In this paper, we observe that this qubit routing is not optimal and that qubit routing should not be independent of subsequent gate optimizations. We find that, when gate optimizations are taken into account, not all SWAP gates have the same basis-gate cost. These insights lead to the development of our qubit routing algorithm, NASSC (Not All Swaps have the Same Cost). NASSC is the first algorithm that considers the subsequent optimizations during the routing step. Our optimization-aware qubit routing leads to better routing decisions and benefits subsequent optimizations. We also propose a new optimization-aware decomposition for the inserted SWAP gates. Our experiments show that the routing overhead compiled with our routing algorithm is reduced by up to 69.30% (21.30% on average) in the number of CNOT gates and up to 43.50% (7.61% on average) in circuit depth compared with the state-of-the-art scheme, SABRE.
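A minimal sketch of the routing step (a naive greedy router on a line topology, not the NASSC algorithm): SWAPs are inserted until the two operands of a gate sit on adjacent physical qubits, and each SWAP is decomposed at a fixed cost of 3 CNOTs, precisely the uniform cost assumption that NASSC relaxes:

```python
# Naive greedy routing on a line topology (illustration only, NOT
# NASSC): insert SWAPs until the gate's operands become adjacent.

def route_on_line(mapping, gate):
    """mapping: logical qubit -> physical position on a line;
    gate: (control, target) logical qubits."""
    ops = []
    lo, hi = sorted((mapping[gate[0]], mapping[gate[1]]))
    while hi - lo > 1:
        # a real router would also update the logical-physical mapping here
        ops.append(("SWAP", lo, lo + 1))
        lo += 1
    ops.append(("CNOT", lo, hi))
    return ops

ops = route_on_line({0: 0, 1: 4}, (0, 1))
swaps = sum(1 for op in ops if op[0] == "SWAP")
cnot_cost = 3 * swaps + 1              # fixed 3-CNOT SWAP decomposition
print(ops, "->", cnot_cost, "CNOTs")
```

The last line is exactly the uniform cost model the paper challenges: when an inserted SWAP lands next to existing gates, decomposition plus gate cancellation can realize it with fewer than 3 CNOTs, so an optimization-aware router may prefer a route whose SWAPs are individually cheaper.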