

Search for: All records

Award ID contains: 1729369


  1. We prove that $\mathrm{poly}(t)\cdot n^{1/D}$-depth local random quantum circuits with two-qudit nearest-neighbor gates on a $D$-dimensional lattice with $n$ qudits are approximate $t$-designs in various measures. These include the "monomial" measure, meaning that the monomials of a random circuit from this family have expectation close to the value that would result from the Haar measure. Previously, the best bound was $\mathrm{poly}(t)\cdot n$ due to Brandão–Harrow–Horodecki (Commun. Math. Phys. 346(2):397–434, 2016) for $D=1$. We also improve the "scrambling" and "decoupling" bounds for spatially local random circuits due to Brown and Fawzi (Scrambling speed of random quantum circuits, 2012). One consequence of our result is that, assuming the polynomial hierarchy ($\mathsf{PH}$) is infinite and that certain counting problems are $\#\mathsf{P}$-hard "on average", sampling within total variation distance from these circuits is hard for classical computers. Previously, exact sampling from the outputs of even constant-depth quantum circuits was known to be hard for classical computers under these assumptions. However, the standard strategy for extending this hardness result to approximate sampling requires the quantum circuits to have a property called "anti-concentration", meaning roughly that the output has near-maximal entropy. Unitary 2-designs have the desired anti-concentration property. Our result improves the required depth for this level of anti-concentration from linear depth to a sub-linear value, depending on the geometry of the interactions. This is relevant to a recent experiment by the Google Quantum AI group to perform such a sampling task with 53 qubits on a two-dimensional lattice (Arute et al. in Nature 574(7779):505–510, 2019; Boixo et al. in Nat. Phys. 14(6):595–600, 2018) (and related experiments by USTC), and confirms their conjecture that $O(\sqrt{n})$ depth suffices for anti-concentration. The proof is based on a previous construction of $t$-designs by Brandão et al. (2016), an analysis of how approximate designs behave under composition, and an extension of the quasi-orthogonality of permutation operators developed by Brandão et al. (2016). Different versions of the approximate design condition correspond to different norms, and part of our contribution is to introduce the norm corresponding to anti-concentration and to establish equivalence between these various norms for low-depth circuits. For random circuits with long-range gates, we use different methods to show that anti-concentration happens at circuit size $O(n\ln^2 n)$, corresponding to depth $O(\ln^3 n)$. We also show a lower bound of $\Omega(n\ln n)$ on the size of such a circuit in this case. We also prove that anti-concentration is possible in depth $O(\ln n \ln\ln n)$ (size $O(n \ln n \ln\ln n)$) using a different model.

     
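     As context for the anti-concentration property invoked above, here is the standard second-moment formulation used in the random-circuit-sampling literature (an illustrative recap, not the specific norm introduced in this paper). For a circuit $U$ on $n$ qudits of local dimension $d$ with output distribution $p_U(x)=|\langle x|U|0^n\rangle|^2$, an approximate unitary 2-design gives a collision-probability bound, and the Paley–Zygmund inequality converts it into anti-concentration:
     $$
     \mathop{\mathbb{E}}_{U}\Big[\sum_{x} p_U(x)^2\Big] \le \frac{C}{d^{\,n}} \;\; (C = O(1))
     \quad\Longrightarrow\quad
     \Pr_{U,\,x}\!\left[p_U(x)\ge \frac{\alpha}{d^{\,n}}\right]\ge \frac{(1-\alpha)^2}{C}
     \quad\text{for } 0<\alpha<1,
     $$
     so a constant fraction of the outcomes carry near-maximal probability mass.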
  2. We propose a novel deterministic method for preparing arbitrary quantum states. When our protocol is compiled into CNOT and arbitrary single-qubit gates, it prepares an $N$-dimensional state in depth $O(\log(N))$ and spacetime allocation (a metric that accounts for the fact that oftentimes some ancilla qubits need not be active for the entire circuit) $O(N)$, which are both optimal. When compiled into the $\{\mathrm{H},\mathrm{S},\mathrm{T},\mathrm{CNOT}\}$ gate set, we show that it requires asymptotically fewer quantum resources than previous methods. Specifically, it prepares an arbitrary state up to error $\epsilon$ with optimal depth $O(\log(N)+\log(1/\epsilon))$ and spacetime allocation $O(N\log(\log(N)/\epsilon))$, improving over $O(\log(N)\log(\log(N)/\epsilon))$ and $O(N\log(N/\epsilon))$, respectively. We illustrate how the reduced spacetime allocation of our protocol enables rapid preparation of many disjoint states with only constant-factor ancilla overhead: $O(N)$ ancilla qubits are reused efficiently to prepare a product state of $w$ $N$-dimensional states in depth $O(w+\log(N))$ rather than $O(w\log(N))$, achieving effectively constant depth per state. We highlight several applications where this ability would be useful, including quantum machine learning, Hamiltonian simulation, and solving linear systems of equations. We provide quantum circuit descriptions of our protocol, detailed pseudocode, and gate-level implementation examples using Braket.

     
    Free, publicly-accessible full text available February 15, 2025
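     As a point of reference for the state-preparation results above, the following is a minimal NumPy sketch of the textbook "rotation tree" behind standard amplitude encoding (the kind of baseline construction these bounds improve on). It is not the paper's protocol, it is restricted to real, non-negative amplitudes, and the helper names are illustrative.

     import numpy as np

     def rotation_angles(amplitudes):
         """Multiplexed-Ry angles, one array per qubit level (hypothetical helper)."""
         a = np.asarray(amplitudes, dtype=float)
         n = int(np.log2(a.size))
         probs = a ** 2
         levels = []
         for k in range(n):
             groups = probs.reshape(2 ** k, -1)            # 2^k groups of size N / 2^k
             mass = groups.sum(axis=1)
             right = groups[:, groups.shape[1] // 2:].sum(axis=1)
             ratio = np.divide(right, mass, out=np.zeros_like(mass), where=mass > 0)
             levels.append(2 * np.arcsin(np.sqrt(ratio)))  # one Ry angle per group
         return levels

     def reconstruct(levels):
         """Apply the angle tree classically to verify it reproduces the amplitudes."""
         state = np.array([1.0])
         for theta in levels:
             c, s = np.cos(theta / 2), np.sin(theta / 2)
             state = np.stack([state * c, state * s], axis=1).reshape(-1)
         return state

     rng = np.random.default_rng(0)
     target = np.abs(rng.normal(size=8))
     target /= np.linalg.norm(target)
     print(np.allclose(reconstruct(rotation_angles(target)), target))  # True

     The depth and spacetime-allocation improvements claimed in the abstract come from how such multiplexed rotations are decomposed and scheduled across ancillas, which this sketch does not attempt to show.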
  3. Free, publicly-accessible full text available February 1, 2025
  4. Contemporary quantum computers encode and process quantum information in binary qubits (d = 2). However, many architectures include higher energy levels that are left as unused computational resources. We demonstrate a superconducting ququart (d = 4) processor and combine quantum optimal control with efficient gate decompositions to implement high-fidelity ququart gates. We distinguish between viewing the ququart as a generalized four-level qubit and as an encoded pair of qubits, and characterize the resulting gates in each case. In randomized benchmarking experiments we observe gate fidelities above 95% and identify coherence as the primary limiting factor. Our results validate ququarts as a viable tool for quantum information processing.
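     For concreteness, the "encoded pair of qubits" viewpoint mentioned above typically uses the standard binary identification of two qubits with the four ququart levels (stated here for illustration, not as the paper's specific construction):
     $$
     |q_1 q_0\rangle \;\mapsto\; |2q_1 + q_0\rangle:\qquad
     |00\rangle \mapsto |0\rangle,\quad |01\rangle \mapsto |1\rangle,\quad |10\rangle \mapsto |2\rangle,\quad |11\rangle \mapsto |3\rangle,
     $$
     under which, for example, a two-qubit CZ becomes the single-ququart diagonal gate $\mathrm{diag}(1,1,1,-1)$.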
  5. Cloud-based quantum computers have become a reality, with a number of companies offering cloud-based access to machines with tens to more than 100 qubits. With easy access to quantum computers, quantum information processing will potentially revolutionize computation, and superconducting transmon-based quantum computers are among the more promising devices available. Cloud service providers today host a variety of these and other prototype quantum computers with highly diverse device properties, sizes, and performances. The variation that exists in today's quantum computers, even among those of the same underlying hardware, motivates the study of how one device can be clearly differentiated and identified from the next. As a case study, this work focuses on the properties of 25 IBM superconducting, fixed-frequency transmon-based quantum computers that range in age from a few months to approximately 2.5 years. Through the analysis of current and historical quantum computer calibration data, this work uncovers key features within the machines, primarily frequency characteristics of the transmon qubits, that can serve as the basis for a unique hardware fingerprint of each quantum computer.
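     The following is a minimal sketch (illustrative only, not the paper's analysis) of the fingerprinting idea described above: turn per-qubit transmon frequencies from calibration data into a feature vector and identify an unlabeled calibration snapshot by nearest-neighbor matching. All frequency values below are made up.

     import numpy as np

     def fingerprint(qubit_freqs_ghz):
         """Fingerprint = sorted transmon frequencies plus their pairwise gaps."""
         f = np.sort(np.asarray(qubit_freqs_ghz, dtype=float))
         return np.concatenate([f, np.diff(f)])

     # Hypothetical frequency tables (GHz) for three 5-qubit devices.
     known = {
         "device_A": [4.97, 5.05, 5.12, 5.21, 5.30],
         "device_B": [4.88, 5.01, 5.09, 5.18, 5.33],
         "device_C": [4.93, 5.00, 5.16, 5.24, 5.28],
     }
     library = {name: fingerprint(freqs) for name, freqs in known.items()}

     # A later calibration snapshot of an unidentified device, with small drift/noise.
     snapshot = [4.971, 5.049, 5.121, 5.209, 5.301]
     query = fingerprint(snapshot)

     best = min(library, key=lambda name: np.linalg.norm(library[name] - query))
     print("closest match:", best)  # device_A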
  6. We present a quantum compilation algorithm that maps Clifford encoders, an equivalence class of quantum circuits that arise universally in quantum error correction, into a representation in the ZX calculus. In particular, we develop a canonical form in the ZX calculus and prove canonicity as well as efficient reducibility of any Clifford encoder into the canonical form. The diagrams produced by our compiler explicitly visualize the information propagation and entanglement structure of the encoder, revealing properties that may be obscured in the circuit or stabilizer-tableau representation.
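     As a concrete example of the objects being compiled above, the simplest Clifford encoder is the three-qubit repetition-code encoder (two CNOTs from the data qubit onto fresh ancillas), shown here with its stabilizer-tableau description; this is standard background, not the paper's canonical form:
     $$
     (\alpha|0\rangle+\beta|1\rangle)\otimes|00\rangle \;\xrightarrow{\;\mathrm{CNOT}_{1\to 2},\,\mathrm{CNOT}_{1\to 3}\;}\; \alpha|000\rangle+\beta|111\rangle,
     $$
     with stabilizer generators $Z_1Z_2$, $Z_2Z_3$ and logical operators $\bar{X}=X_1X_2X_3$, $\bar{Z}=Z_1$.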
  7. Quantum computing has attracted much research attention because of its potential to achieve fundamental speed and efficiency improvements in various domains. Among different quantum algorithms, Parameterized Quantum Circuits (PQC) for Quantum Machine Learning (QML) show promise for realizing quantum advantages on current Noisy Intermediate-Scale Quantum (NISQ) machines. To facilitate QML and PQC research, a Python library called TorchQuantum has recently been released. It can construct, simulate, and train PQCs for machine learning tasks with high speed and convenient debugging support. Besides quantum for ML, we want to draw the community's attention to the reversed direction: ML for quantum. Specifically, the TorchQuantum library also supports using data-driven ML models to solve problems in quantum system research, such as predicting the impact of quantum noise on circuit fidelity and improving quantum circuit compilation efficiency. This paper presents a case study of the ML-for-quantum part of TorchQuantum. Since estimating the impact of noise on circuit reliability is an essential step toward understanding and mitigating noise, we propose to leverage classical ML to predict the noise impact on circuit fidelity. Inspired by the natural graph representation of quantum circuits, we propose to leverage a graph transformer model to predict the noisy circuit fidelity. We first collect a large dataset with a variety of quantum circuits and obtain their fidelity on noisy simulators and real machines. We then embed each circuit into a graph with gate and noise properties as node features, and adopt a graph transformer to predict the fidelity. This avoids the exponential cost of classical simulation and estimates fidelity with polynomial complexity. Evaluated on 5,000 random and algorithm circuits, the graph transformer predictor provides accurate fidelity estimation with an RMSE of 0.04, outperforming a simple neural-network-based model by 0.02 on average. It achieves R² scores of 0.99 and 0.95 for random and algorithm circuits, respectively. Compared with circuit simulators, the predictor offers over 200× speedup for estimating the fidelity. The datasets and predictors can be accessed in the TorchQuantum library.
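     The following is a minimal PyTorch sketch of the prediction pipeline described above. It is a simplified stand-in (a plain sequence transformer over gate tokens rather than TorchQuantum's graph transformer), and the dataset, feature dimensions, and training data are made up for illustration.

     import torch
     import torch.nn as nn

     class CircuitFidelityPredictor(nn.Module):
         def __init__(self, n_gate_types=10, d_noise_feats=3, d_model=32):
             super().__init__()
             self.gate_embed = nn.Embedding(n_gate_types, d_model)
             self.noise_proj = nn.Linear(d_noise_feats, d_model)
             layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                                batch_first=True)
             self.encoder = nn.TransformerEncoder(layer, num_layers=2)
             self.head = nn.Linear(d_model, 1)

         def forward(self, gate_ids, noise_feats):
             # gate_ids: (batch, n_gates) int; noise_feats: (batch, n_gates, d_noise_feats)
             x = self.gate_embed(gate_ids) + self.noise_proj(noise_feats)
             x = self.encoder(x)              # attention mixes information across gates
             return torch.sigmoid(self.head(x.mean(dim=1))).squeeze(-1)  # fidelity in [0, 1]

     # Toy training loop on random data, just to show the shapes and the loss.
     model = CircuitFidelityPredictor()
     opt = torch.optim.Adam(model.parameters(), lr=1e-3)
     gate_ids = torch.randint(0, 10, (64, 20))
     noise = torch.rand(64, 20, 3)
     target = torch.rand(64)                  # stand-in for measured fidelities
     for _ in range(5):
         opt.zero_grad()
         loss = nn.functional.mse_loss(model(gate_ids, noise), target)
         loss.backward()
         opt.step()

     In the real setting the node features would come from the circuit's gates plus the backend's calibration and noise data, and the regression targets from fidelities measured on noisy simulators or hardware.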
  8. Recent work shows that quantum signal processing (QSP) and its multi-qubit lifted version, quantum singular value transformation (QSVT), unify and improve the presentation of most quantum algorithms. QSP/QSVT characterize the ability, by alternating ansätze, to obliviously transform the singular values of subsystems of unitary matrices by polynomial functions; these algorithms are numerically stable and analytically well-understood. That said, QSP/QSVT require consistent access to a single oracle, saying nothing about computing joint properties of two or more oracles; these can be far cheaper to determine given an ability to pit oracles against one another coherently. This work introduces a corresponding theory of QSP over multiple variables: M-QSP. Surprisingly, despite the non-existence of the fundamental theorem of algebra for multivariable polynomials, there exist necessary and sufficient conditions under which a desired stable multivariable polynomial transformation is possible. Moreover, the classical subroutines used by QSP protocols survive in the multivariable setting for non-obvious reasons, and remain numerically stable and efficient. Up to a well-defined conjecture, we give proof that the family of achievable multivariable transforms is as loosely constrained as could be expected. The unique ability of M-QSP to obliviously approximate joint functions of multiple variables coherently leads to novel speedups incommensurate with those of other quantum algorithms, and provides a bridge from quantum algorithms to algebraic geometry.

     
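     As background for the multivariable generalization described above, the standard single-variable QSP ansatz (in the commonly used $W_x$ convention) alternates a fixed $x$-dependent rotation with tunable phases; M-QSP interleaves such rotations for several variables. For phases $\vec{\phi}=(\phi_0,\dots,\phi_d)$:
     $$
     U_{\vec{\phi}}(x) \;=\; e^{i\phi_0 Z}\prod_{k=1}^{d} W(x)\, e^{i\phi_k Z},
     \qquad
     W(x) \;=\; \begin{pmatrix} x & i\sqrt{1-x^2}\\ i\sqrt{1-x^2} & x \end{pmatrix},
     $$
     whose top-left matrix element is a polynomial $P(x)$ of degree at most $d$ with parity $d \bmod 2$ and $|P(x)|\le 1$ on $[-1,1]$; the question addressed in this work is which joint polynomial transforms of several such variables are achievable.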