Title: Fractional Certificates for Bounded Functions
Abstract:
A folklore conjecture in quantum computing is that the acceptance probability of a quantum query algorithm can be approximated by a classical decision tree, with only a polynomial increase in the number of queries. Motivated by this conjecture, Aaronson and Ambainis (Theory of Computing, 2014) conjectured that this should hold more generally for any bounded function computed by a low-degree polynomial. In this work we prove two new results towards establishing this conjecture: first, that any such polynomial has small fractional certificate complexity; and second, that many inputs have a small sensitive block. We show that these would imply the Aaronson-Ambainis conjecture, assuming a conjectured extension of Talagrand's concentration inequality. On the technical side, many classical techniques used in the analysis of Boolean functions seem to fail when applied to bounded functions. Here, we develop a new technique based on a mix of combinatorics, analysis, and geometry, which in part extends a recent technique of Knop et al. (STOC 2021) to bounded functions.
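For context, one common way the Aaronson-Ambainis conjecture is stated in the literature is as an influence bound for bounded low-degree polynomials. The paraphrase below is a standard formulation rather than the paper's exact statement; the polynomial loss hidden in poly(·) and the normalization of influence are left unspecified, and conventions vary.

```latex
% A common formulation of the Aaronson--Ambainis conjecture (paraphrased;
% the exponent hidden in poly(.) and the influence normalization vary by source).
Let $f\colon \{-1,1\}^n \to [0,1]$ be computed by a real polynomial of degree $d$.
Then there exists a coordinate $i \in [n]$ such that
\[
  \operatorname{Inf}_i(f) \;\ge\; \operatorname{poly}\!\left(\frac{\operatorname{Var}[f]}{d}\right),
\]
where $\operatorname{Inf}_i(f) = \tfrac{1}{4}\,\mathbb{E}_x\bigl[\bigl(f(x) - f(x^{\oplus i})\bigr)^2\bigr]$
(with $x^{\oplus i}$ denoting $x$ with its $i$-th coordinate flipped) and $\operatorname{Var}[f]$
is the variance of $f$ under the uniform distribution on $\{-1,1\}^n$.
```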
Award ID(s):
2006443
PAR ID:
10520228
Author(s) / Creator(s):
Editor(s):
Tauman_Kalai, Yael
Publisher / Repository:
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Date Published:
Volume:
251
ISSN:
1868-8969
ISBN:
978-3-95977-263-1
Page Range / eLocation ID:
251-251
Subject(s) / Keyword(s):
Aaronson-Ambainis conjecture; fractional block sensitivity; Talagrand inequality; Theory of computation → Randomness, geometry and discrete structures; Mathematics of computing → Discrete mathematics
Format(s):
Medium: X; Size: 13 pages, 709653 bytes; Other: application/pdf
Right(s):
Creative Commons Attribution 4.0 International license; info:eu-repo/semantics/openAccess
Sponsoring Org:
National Science Foundation
More Like this
  1. Ta-Shma, Amnon (Ed.)
    We study the fundamental challenge of exhibiting explicit functions that have small correlation with low-degree polynomials over 𝔽₂. Our main contributions include: 1) In STOC 2020, CHHLZ introduced a new technique to prove correlation bounds. Using their technique they established new correlation bounds for low-degree polynomials. They conjectured that their technique generalizes to higher-degree polynomials as well. We give a counterexample to their conjecture, in fact ruling out weaker parameters and showing that what they prove is essentially the best possible. 2) We propose a new approach for proving correlation bounds with the central "mod functions," consisting of two steps: (I) the polynomials that maximize correlation are symmetric, and (II) symmetric polynomials have small correlation. In contrast with related results in the literature, we conjecture that (I) is true. We argue that this approach is not affected by existing "barrier results." 3) We prove our conjecture for quadratic polynomials. Specifically, we determine the maximum possible correlation between quadratic polynomials modulo 2 and the functions (x_1, …, x_n) → z^{∑ x_i} for any z on the complex unit circle, and show that it is achieved by symmetric polynomials. To obtain our results we develop a new proof technique: we express correlation in terms of directional derivatives and analyze it by slowly restricting the direction. 4) We make partial progress on the conjecture for cubic polynomials, in particular proving tight correlation bounds for cubic polynomials whose degree-3 part is symmetric. A brute-force illustration of the correlation quantity in item 3 follows below.
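    As a concrete illustration of the quantity in item 3 above, the correlation between a polynomial p over 𝔽₂ and the mod-style function x ↦ z^{∑ x_i} can be checked by brute force for small n. The Python sketch below assumes correlation means |E_x[(-1)^{p(x)} · conj(z^{∑ x_i})]|; the paper's exact normalization may differ, and the function names are illustrative.

```python
import itertools
import cmath

def correlation(poly_terms, z, n):
    """Brute-force |E_x[(-1)^{p(x)} * conj(z^{sum_i x_i})]| over x in {0,1}^n.

    poly_terms: iterable of tuples of variable indices; each tuple is one
    monomial of the F_2 polynomial p (the empty tuple is the constant 1).
    Illustrative only; the paper's normalization may differ.
    """
    total = 0j
    for x in itertools.product((0, 1), repeat=n):
        p_val = sum(all(x[i] for i in term) for term in poly_terms) % 2  # p(x) over F_2
        total += (-1) ** p_val * (z ** sum(x)).conjugate()
    return abs(total) / 2 ** n

if __name__ == "__main__":
    n = 8
    z = cmath.exp(2j * cmath.pi / 3)  # cube root of unity, i.e. a "mod 3" function
    quad = [(i, j) for i in range(n) for j in range(i + 1, n)]  # symmetric quadratic sum_{i<j} x_i x_j
    print(correlation(quad, z, n))
```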
  2. Let f: {0,1}^n → {0,1} be a Boolean function, and let f∧(x, y) = f(x ∧ y) denote the AND-function of f, where x ∧ y denotes bit-wise AND. We study the deterministic communication complexity of f∧ and show that, up to a log n factor, it is bounded by a polynomial in the logarithm of the real rank of the communication matrix of f∧. This comes within a log n factor of establishing the log-rank conjecture for AND-functions with no assumptions on f. Our result stands in contrast with previous results on special cases of the log-rank conjecture, which needed significant restrictions on f such as monotonicity or low 𝔽₂-degree. Our techniques can also be used to prove (within a log n factor) a lifting theorem for AND-functions, stating that the deterministic communication complexity of f∧ is polynomially related to the AND-decision tree complexity of f. The results rely on a new structural result regarding Boolean functions f: {0,1}^n → {0,1} with a sparse polynomial representation, which may be of independent interest. We show that if the polynomial computing f has few monomials, then the set system of the monomials has a small hitting set, of size poly-logarithmic in its sparsity. We also establish extensions of this result to multilinear polynomials on {0,1}^n with a larger range. A small illustration of the hitting-set object is sketched below.
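    To make the structural result in item 2 above concrete: viewing the monomials of a sparse polynomial as a set system, a hitting set is a set of variables meeting every non-constant monomial. The Python sketch below only illustrates the object being bounded, via a simple greedy heuristic; it is not the paper's construction and carries no poly-logarithmic size guarantee.

```python
def greedy_hitting_set(monomials):
    """Greedy hitting set for a set system of monomials.

    monomials: list of sets of variable indices, one set per monomial of the
    sparse polynomial. Returns a set of variables intersecting every non-empty
    monomial. Illustrative only; not the construction from the paper.
    """
    remaining = [set(m) for m in monomials if m]  # the constant monomial needs no hitting
    hitting = set()
    while remaining:
        # pick the variable occurring in the most not-yet-hit monomials
        counts = {}
        for m in remaining:
            for v in m:
                counts[v] = counts.get(v, 0) + 1
        best = max(counts, key=counts.get)
        hitting.add(best)
        remaining = [m for m in remaining if best not in m]
    return hitting

# Example: monomials of a sparse polynomial on x_0..x_4
print(greedy_hitting_set([{0, 1}, {1, 2}, {3}, {3, 4}]))  # prints {1, 3}
```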
  3. Despite fundamental interests in learning quantum circuits, the existence of a computationally efficient algorithm for learning shallow quantum circuits remains an open question. Because shallow quantum circuits can generate distributions that are classically hard to sample from, existing learning algorithms do not apply. In this work, we present a polynomial-time classical algorithm for learning the description of any unknown 𝑛-qubit shallow quantum circuit 𝑈 (with arbitrary unknown architecture) within a small diamond distance using single-qubit measurement data on the output states of 𝑈. We also provide a polynomial-time classical algorithm for learning the description of any unknown 𝑛-qubit state |𝜓⟩ = 𝑈|0^𝑛⟩ prepared by a shallow quantum circuit 𝑈 (on a 2D lattice) within a small trace distance using single-qubit measurements on copies of |𝜓⟩. Our approach uses a quantum circuit representation based on local inversions and a technique to combine these inversions. This circuit representation yields an optimization landscape that can be efficiently navigated and enables efficient learning of quantum circuits that are classically hard to simulate. 
  4. Quantum computational supremacy arguments, which describe a way for a quantum computer to perform a task that cannot also be done by a classical computer, typically require some sort of computational assumption related to the limitations of classical computation. One common assumption is that the polynomial hierarchy (PH) does not collapse, a stronger version of the statement that P ≠ NP, which leads to the conclusion that any classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuits. However, the asymptotic nature of this conclusion prevents us from calculating exactly how many qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers. We refine these quantum computational supremacy arguments and perform such a calculation by imposing fine-grained versions of the non-collapse conjecture. Our first two conjectures, poly3-NSETH(a) and per-int-NSETH(b), take specific classical counting problems related to the number of zeros of a degree-3 polynomial in n variables over 𝔽₂ or the permanent of an n × n integer-valued matrix, and assert that any non-deterministic algorithm that solves them requires 2^{cn} time steps, where c ∈ {a, b}. A third conjecture, poly3-ave-SBSETH(a′), asserts a similar statement about average-case algorithms living in the exponential-time version of the complexity class SBP. We analyze evidence for these conjectures and argue that they are plausible when a = 1/2, b = 0.999, and a′ = 1/2. Imposing poly3-NSETH(1/2) and per-int-NSETH(0.999), and assuming that the runtime of a hypothetical quantum circuit simulation algorithm would scale linearly with the number of gates/constraints/optical elements, we conclude that Instantaneous Quantum Polynomial-Time (IQP) circuits with 208 qubits and 500 gates, Quantum Approximate Optimization Algorithm (QAOA) circuits with 420 qubits and 500 constraints, and boson sampling circuits (i.e. linear optical networks) with 98 photons and 500 optical elements are large enough for the task of producing samples from their output distributions up to constant multiplicative error to be intractable on current technology. Imposing poly3-ave-SBSETH(1/2), we additionally rule out simulations with constant additive error for IQP and QAOA circuits of the same size. Without the assumption of linearly increasing simulation time, we can make analogous statements for circuits with slightly fewer qubits but requiring 10^4 to 10^7 gates. The counting problem behind poly3-NSETH is sketched below.
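    For concreteness, the counting problem underlying poly3-NSETH(a) in item 4 above asks for the number of zeros of a degree-3 polynomial over 𝔽₂ in n variables. The Python sketch below solves it by brute force in 2^n time; the conjecture asserts that even non-deterministic algorithms cannot beat roughly 2^{an} time. The input representation and names here are illustrative assumptions, not the paper's formalization.

```python
import itertools

def count_zeros_deg3(n, monomials):
    """Count x in {0,1}^n with p(x) = 0 over F_2, where p is given by its monomials.

    monomials: iterable of tuples of at most 3 variable indices (degree <= 3);
    the empty tuple denotes the constant term 1. Brute force, 2^n evaluations.
    """
    zeros = 0
    for x in itertools.product((0, 1), repeat=n):
        value = sum(all(x[i] for i in m) for m in monomials) % 2  # p(x) over F_2
        zeros += (value == 0)
    return zeros

# Tiny example: p(x) = x_0*x_1*x_2 + x_3 + 1 on 4 variables
print(count_zeros_deg3(4, [(0, 1, 2), (3,), ()]))  # prints 8
```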
  5. We study three integrals related to the celebrated pair correlation conjecture of H. L. Montgomery. The first is the integral of Montgomery's function F(α, T) in bounded intervals, the second is an integral introduced by Selberg related to estimating the variance of primes in short intervals, and the last is the second moment of the logarithmic derivative of the Riemann zeta-function near the critical line. The conjectured asymptotic for any of these three integrals is equivalent to Montgomery's pair correlation conjecture. Assuming the Riemann hypothesis, we substantially improve the known upper and lower bounds for these integrals by introducing new connections to certain extremal problems in Fourier analysis. In an appendix, we study the intriguing problem of establishing the sharp form of an embedding between two Hilbert spaces of entire functions naturally connected to Montgomery's pair correlation conjecture.
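    For reference, Montgomery's function F(α, T) mentioned in item 5 above is usually defined as follows; this is one standard normalization from the literature, and the paper's conventions may differ slightly.

```latex
% Montgomery's pair correlation function (one standard normalization).
% Here \gamma, \gamma' range over ordinates of nontrivial zeros of \zeta(s),
% counted with multiplicity, and w is a smoothing weight.
\[
  F(\alpha, T)
    \;=\;
  \Bigl(\tfrac{T}{2\pi}\log T\Bigr)^{-1}
  \sum_{0 < \gamma,\, \gamma' \le T}
    T^{\,i\alpha(\gamma - \gamma')}\, w(\gamma - \gamma'),
  \qquad
  w(u) \;=\; \frac{4}{4 + u^{2}}.
\]
```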