

Title: Linear growth of quantum circuit complexity
Abstract: The complexity of quantum states has become a key quantity of interest across various subfields of physics, from quantum computing to the theory of black holes. The evolution of generic quantum systems can be modelled by considering a collection of qubits subjected to sequences of random unitary gates. Here we investigate how the complexity of these random quantum circuits increases by considering how to construct a unitary operation from Haar-random two-qubit quantum gates. Implementing the unitary operation exactly requires a minimal number of gates, and this number is the operation's exact circuit complexity. We prove a conjecture that this complexity grows linearly before saturating when the number of applied gates reaches a threshold that grows exponentially with the number of qubits. Our proof overcomes difficulties in establishing lower bounds for the exact circuit complexity by combining differential topology and elementary algebraic geometry with an inductive construction of Clifford circuits.
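
A rough dimension count (an illustrative sketch, not the paper's proof) conveys the scale at which linear growth must saturate: each two-qubit gate supplies at most $\dim \mathrm{SU}(4) = 15$ real parameters, while a generic $n$-qubit unitary lives in $\mathrm{SU}(2^n)$ of dimension $4^n - 1$, so exactly implementing a generic unitary needs on the order of $(4^n - 1)/15$ gates, exponential in $n$.

```python
# Illustrative dimension count: each two-qubit gate adds at most
# dim SU(4) = 15 real parameters, while a generic n-qubit unitary
# lives in SU(2^n), of dimension 4^n - 1.  Complexity can therefore
# grow linearly in the gate count only until roughly (4^n - 1)/15
# gates.  (A rough sketch, not the paper's precise lower bound.)

def saturation_gate_count(n: int) -> int:
    """Gate count at which dimension counting permits saturation."""
    dim_su_2n = 4**n - 1          # dimension of SU(2^n)
    params_per_gate = 15          # dimension of SU(4)
    return -(-dim_su_2n // params_per_gate)  # ceiling division

for n in (2, 4, 8, 16):
    print(f"n = {n:2d} qubits: ~{saturation_gate_count(n):.3e} gates")
```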
Award ID(s):
2116679
NSF-PAR ID:
10333805
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Nature Physics
Volume:
18
Issue:
5
ISSN:
1745-2473
Page Range / eLocation ID:
528 to 532
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. We prove that $\mathrm{poly}(t) \cdot n^{1/D}$-depth local random quantum circuits with two-qudit nearest-neighbor gates on a $D$-dimensional lattice with $n$ qudits are approximate $t$-designs in various measures. These include the "monomial" measure, meaning that the monomials of a random circuit from this family have expectation close to the value that would result from the Haar measure. Previously, the best bound was $\mathrm{poly}(t) \cdot n$ due to Brandão–Harrow–Horodecki (Commun. Math. Phys. 346(2):397–434, 2016) for $D=1$. We also improve the "scrambling" and "decoupling" bounds for spatially local random circuits due to Brown and Fawzi (Scrambling speed of random quantum circuits, 2012). One consequence of our result is that, assuming the polynomial hierarchy ($\textsf{PH}$) is infinite and that certain counting problems are $\#\textsf{P}$-hard "on average", sampling within total variation distance from these circuits is hard for classical computers. Previously, exact sampling from the outputs of even constant-depth quantum circuits was known to be hard for classical computers under these assumptions. However, the standard strategy for extending this hardness result to approximate sampling requires the quantum circuits to have a property called "anti-concentration", meaning roughly that the output has near-maximal entropy. Unitary 2-designs have the desired anti-concentration property. Our result improves the required depth for this level of anti-concentration from linear depth to a sub-linear value, depending on the geometry of the interactions. This is relevant to a recent experiment by the Google Quantum AI group to perform such a sampling task with 53 qubits on a two-dimensional lattice (Arute et al. in Nature 574(7779):505–510, 2019; Boixo et al. in Nat. Phys. 14(6):595–600, 2018) (and related experiments by USTC), and confirms their conjecture that $O(\sqrt{n})$ depth suffices for anti-concentration. The proof is based on a previous construction of $t$-designs by Brandão et al. (2016), an analysis of how approximate designs behave under composition, and an extension of the quasi-orthogonality of permutation operators developed by Brandão et al. (2016). Different versions of the approximate design condition correspond to different norms, and part of our contribution is to introduce the norm corresponding to anti-concentration and to establish equivalence between these various norms for low-depth circuits. For random circuits with long-range gates, we use different methods to show that anti-concentration happens at circuit size $O(n \ln^2 n)$, corresponding to depth $O(\ln^3 n)$. We also show a lower bound of $\Omega(n \ln n)$ on the size of such circuits in this case. We also prove that anti-concentration is possible in depth $O(\ln n \ln\ln n)$ (size $O(n \ln n \ln\ln n)$) using a different model.

     
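To make the anti-concentration property above concrete, here is a small numerical sketch (NumPy only; the qubit count, depth, and 1D brickwork layout are chosen for readability and are not taken from the paper) that builds a circuit from Haar-random two-qubit gates and tracks the collision probability $Z = \sum_x p(x)^2$, which drifts from 1 toward the Porter-Thomas scale $2/2^n$ as the circuit anti-concentrates:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    """Haar-random d x d unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases  # fix the phase ambiguity of QR

def apply_two_qubit(state, u, i, n):
    """Apply a 4x4 unitary u to adjacent qubits (i, i+1) of an n-qubit state."""
    psi = state.reshape((2**i, 4, 2**(n - i - 2)))
    return np.einsum('ba,xay->xby', u, psi).reshape(-1)

n, depth = 8, 20
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # start in |0...0>, for which Z = 1 (fully concentrated)
print(f"Porter-Thomas scale 2/2^n = {2 / 2**n:.5f}")
for layer in range(1, depth + 1):
    for i in range(layer % 2, n - 1, 2):  # alternate even/odd bonds
        state = apply_two_qubit(state, haar_unitary(4), i, n)
    if layer % 5 == 0:
        z = np.sum(np.abs(state) ** 4)  # collision probability
        print(f"depth {layer:2d}: Z = {z:.5f}")
```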
  2. The current phase of quantum computing is the Noisy Intermediate-Scale Quantum (NISQ) era. On NISQ devices, two-qubit gates such as CNOTs are much noisier than single-qubit gates, so it is essential to minimize their count. Quantum circuit synthesis is the process of decomposing an arbitrary unitary into a sequence of quantum gates, and it can be used as an optimization tool to produce shorter circuits that improve overall circuit fidelity. However, the time-to-solution of synthesis grows exponentially with the number of qubits. As a result, synthesis is intractable for circuits at large qubit scale. In this paper, we propose a hierarchical, block-by-block optimization framework, QGo, for quantum circuit optimization. Our approach allows an exponential-cost optimization to scale to large circuits. QGo uses a combination of partitioning and synthesis: 1) partition the circuit into a sequence of independent circuit blocks; 2) re-generate and optimize each block using quantum synthesis; and 3) re-compose the final circuit by stitching all the blocks together. We perform our analysis and show the fidelity improvements in three different regimes: small-size circuits on real devices, medium-size circuits on noisy simulations, and large-size circuits on analytical models. Our technique can be applied after existing optimizations to achieve higher circuit fidelity. Using a set of NISQ benchmarks, we show that QGo can reduce the number of CNOT gates by 29.9% on average and up to 50% when compared with industrial compiler optimizations such as t|ket⟩. When executed on the IBM Athens system, shorter depth leads to higher circuit fidelity. We also demonstrate the scalability of our QGo technique by optimizing circuits of 60+ qubits. Our technique is the first demonstration of successfully employing and scaling synthesis in the compilation toolchain for large circuits. Overall, our approach is robust for direct incorporation in production compiler toolchains to further improve circuit fidelity.
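A minimal sketch of the partition-synthesize-stitch flow described above, assuming a simplified gate-list circuit representation; the `resynthesize_block` helper is a hypothetical stand-in for a real numerical synthesis tool, and the greedy partitioner is likewise illustrative rather than QGo's actual heuristic:

```python
from typing import List, Tuple

Gate = Tuple[str, Tuple[int, ...]]   # e.g. ("cx", (0, 1))

def partition(circuit: List[Gate], max_qubits: int = 3) -> List[List[Gate]]:
    """Greedily group consecutive gates into blocks touching at most
    max_qubits distinct qubits (a simplified stand-in for QGo's
    partitioning step)."""
    blocks, current, qubits = [], [], set()
    for gate in circuit:
        new_qubits = qubits | set(gate[1])
        if len(new_qubits) > max_qubits and current:
            blocks.append(current)          # close the current block
            current = []
            new_qubits = set(gate[1])
        current.append(gate)
        qubits = new_qubits
    if current:
        blocks.append(current)
    return blocks

def resynthesize_block(block: List[Gate]) -> List[Gate]:
    """Hypothetical stand-in: a real implementation would compute the
    block's unitary and call a numerical synthesis tool to find an
    equivalent circuit with fewer CNOTs."""
    return block  # identity placeholder

def qgo_like_pass(circuit: List[Gate]) -> List[Gate]:
    """Partition, re-synthesize each block, and stitch them back together."""
    return [g for block in partition(circuit) for g in resynthesize_block(block)]

example = [("cx", (0, 1)), ("h", (1,)), ("cx", (1, 2)), ("cx", (3, 4)), ("cx", (2, 3))]
print(qgo_like_pass(example))
```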
  3. A basic question in the theory of fault-tolerant quantum computation is to understand the fundamental resource costs for performing a universal logical set of gates on encoded qubits to arbitrary accuracy. Here we consider qubits encoded with constant space overhead (i.e. finite encoding rate) in the limit of arbitrarily large code distance $d$ through the use of topological codes associated to triangulations of hyperbolic surfaces. We introduce explicit protocols to demonstrate how Dehn twists of the hyperbolic surface can be implemented on the code through constant-depth unitary circuits, without increasing the space overhead. The circuit for a given Dehn twist consists of a permutation of physical qubits, followed by a constant-depth local unitary circuit, where locality here is defined with respect to a hyperbolic metric that defines the code. Applying our results to the hyperbolic Fibonacci Turaev-Viro code implies the possibility of applying universal logical gate sets on encoded qubits through constant-depth unitary circuits and with constant space overhead. Our circuits are inherently protected from errors as they map local operators to local operators while changing the size of their support by at most a constant factor; in the presence of noisy syndrome measurements, our results suggest the possibility of universal fault-tolerant quantum computation with constant space overhead and time overhead of $O(d/\log d)$. For quantum circuits that allow parallel gate operations, this yields the optimal scaling of space-time overhead known to date.
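The locality-preservation property cited above (local operators map to local operators with support growing by at most a constant factor) can be illustrated with a toy lightcone count; the sketch below uses a 1D nearest-neighbor chain, not the hyperbolic-surface geometry of the actual construction:

```python
def support_after_layers(support: set, k: int, n: int) -> set:
    """Grow an operator's support through k layers of nearest-neighbor
    gates on a line of n qubits: each layer can extend the support by
    one site on each side (the lightcone), so depth-k circuits enlarge
    a size-s support to at most s + 2k."""
    for _ in range(k):
        support = {q + d for q in support for d in (-1, 0, 1)}
        support = {q for q in support if 0 <= q < n}
    return support

s0 = {10}  # single-site operator in the middle of a 40-site chain
for k in (1, 2, 5):
    s = support_after_layers(set(s0), k, n=40)
    print(f"depth {k}: support size {len(s)} (<= 1 + 2*{k} = {1 + 2 * k})")
```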
  4.
    Quantum computational supremacy arguments, which describe a way for a quantum computer to perform a task that cannot also be done by a classical computer, typically require some sort of computational assumption related to the limitations of classical computation. One common assumption is that the polynomial hierarchy ($\textsf{PH}$) does not collapse, a stronger version of the statement that $\textsf{P} \ne \textsf{NP}$, which leads to the conclusion that any classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuits. However, the asymptotic nature of this conclusion prevents us from calculating exactly how many qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers. We refine these quantum computational supremacy arguments and perform such a calculation by imposing fine-grained versions of the non-collapse conjecture. Our first two conjectures, poly3-NSETH($a$) and per-int-NSETH($b$), take specific classical counting problems related to the number of zeros of a degree-3 polynomial in $n$ variables over $\mathbb{F}_2$ or the permanent of an $n \times n$ integer-valued matrix, and assert that any non-deterministic algorithm that solves them requires $2^{cn}$ time steps, where $c \in \{a, b\}$. A third conjecture, poly3-ave-SBSETH($a'$), asserts a similar statement about average-case algorithms living in the exponential-time version of the complexity class $\textsf{SBP}$. We analyze evidence for these conjectures and argue that they are plausible when $a = 1/2$, $b = 0.999$, and $a' = 1/2$. Imposing poly3-NSETH(1/2) and per-int-NSETH(0.999), and assuming that the runtime of a hypothetical quantum circuit simulation algorithm would scale linearly with the number of gates/constraints/optical elements, we conclude that Instantaneous Quantum Polynomial-Time (IQP) circuits with 208 qubits and 500 gates, Quantum Approximate Optimization Algorithm (QAOA) circuits with 420 qubits and 500 constraints, and boson sampling circuits (i.e. linear optical networks) with 98 photons and 500 optical elements are large enough for the task of producing samples from their output distributions up to constant multiplicative error to be intractable on current technology. Imposing poly3-ave-SBSETH(1/2), we additionally rule out simulations with constant additive error for IQP and QAOA circuits of the same size. Without the assumption of linearly increasing simulation time, we can make analogous statements for circuits with slightly fewer qubits but requiring $10^4$ to $10^7$ gates.
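The scale of these fine-grained conclusions can be reproduced with a back-of-envelope estimate. The sketch below plugs the IQP instance size from the text into an assumed runtime of $g \cdot 2^{n/2}$ steps (a simplified reading of poly3-NSETH(1/2) combined with the linear-in-gates scaling assumption, for illustration only) and compares it against an exascale machine:

```python
# Back-of-envelope: under poly3-NSETH(1/2) plus linear-in-gates scaling,
# simulating an IQP circuit with n qubits and g gates is taken to cost
# at least g * 2**(n/2) elementary steps (a simplified reading of the
# abstract, for illustration only).

n, g = 208, 500                    # IQP instance size from the text
ops = g * 2 ** (n / 2)             # required classical operations
exascale_rate = 1e18               # optimistic supercomputer, ops/sec
seconds = ops / exascale_rate
years = seconds / (3600 * 24 * 365)
print(f"~{ops:.2e} operations -> ~{years:.1e} years at 10^18 ops/s")
```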
  5. Variational Quantum Algorithms (VQAs) rely upon the iterative optimization of a parameterized unitary circuit with respect to an objective function. Since quantum machines are noisy and expensive resources, it is imperative to choose a VQA's ansatz appropriately and to set its initial parameters close to optimal. This work tackles the problem of finding initial ansatz parameters by proposing CAFQA, a Clifford ansatz for quantum accuracy. The CAFQA ansatz is a hardware-efficient circuit built with only Clifford gates. In this ansatz, the initial parameters for the tunable gates are chosen by searching efficiently through the Clifford parameter space via classical simulation, thereby producing a suitable stabilizer state. The stabilizer states produced are shown to always equal or outperform traditional classical initialization (e.g., Hartree-Fock), and often produce high-accuracy estimations prior to quantum exploration. Furthermore, the technique is well suited to classical computation because a) Clifford circuits can be simulated exactly on classical machines in polynomial time, and b) the discrete Clifford space, while scaling exponentially in the number of qubits, can be searched efficiently via Bayesian optimization. For the Variational Quantum Eigensolver (VQE) task of molecular ground-state energy estimation up to 20 qubits, CAFQA's Clifford ansatz achieves a mean accuracy of nearly 99%, recovering as much as 99.99% of the correlation energy over Hartree-Fock. Notably, the scalability of the approach allows for preliminary ground-state energy estimation of the challenging chromium dimer with an accuracy greater than Hartree-Fock. With CAFQA's initialization, VQA convergence is accelerated by a factor of 2.5. In all, this work shows that stabilizer states are an accurate ansatz initialization for VQAs. Furthermore, it highlights the potential for quantum-inspired classical techniques to support VQAs.
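A minimal sketch of the Clifford-initialization idea: restricting each ansatz rotation angle to a multiple of π/2 makes every candidate circuit Clifford, hence classically simulable, and the discrete space can then be searched for the lowest energy. The `clifford_energy` surrogate below is hypothetical; a real implementation would score candidates with a stabilizer simulator and search with Bayesian optimization rather than exhaustive enumeration:

```python
import itertools
import math

# Restricting rotation angles to these values keeps the circuit Clifford.
CLIFFORD_ANGLES = [0, math.pi / 2, math.pi, 3 * math.pi / 2]

def clifford_energy(angles):
    """Hypothetical stand-in for scoring a Clifford ansatz: a real
    implementation would build the circuit, simulate it with a
    stabilizer simulator (polynomial time), and estimate <H>.  An
    arbitrary smooth surrogate keeps this sketch runnable."""
    return sum(math.cos(a + 0.3 * i) for i, a in enumerate(angles))

def cafqa_like_init(num_params: int):
    """Exhaustively search Clifford points for the best initialization.
    (CAFQA searches this exponentially large space with Bayesian
    optimization instead of enumeration.)"""
    best = min(itertools.product(CLIFFORD_ANGLES, repeat=num_params),
               key=clifford_energy)
    return best, clifford_energy(best)

params, energy = cafqa_like_init(num_params=4)
print("initial angles:", [round(p, 3) for p in params], "energy:", round(energy, 3))
```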