

Title: Localized Phase for the Erdős–Rényi Graph
Abstract

We analyse the eigenvectors of the adjacency matrix of the Erdős–Rényi graph $\mathbb{G}(N, d/N)$ for $\sqrt{\log N} \ll d \lesssim \log N$. We show the existence of a localized phase, where each eigenvector is exponentially localized around a single vertex of the graph. This complements the completely delocalized phase previously established in Alt et al. (Commun Math Phys 388(1):507–579, 2021). For large enough $d$, we establish a mobility edge by showing that the localized phase extends up to the boundary of the delocalized phase. We derive explicit asymptotics for the localization length up to the mobility edge and characterize its divergence near the phase boundary. The proof is based on a rigorous verification of Mott's criterion for localization, comparing the tunnelling amplitude between localization centres with the eigenvalue spacing. The first main ingredient is a new family of global approximate eigenvectors, for which sharp enough estimates on the tunnelling amplitude can be established. The second main ingredient is a lower bound on the spacing of approximate eigenvalues. It follows from an anticoncentration result for these discrete random variables, obtained by a recursive application of a self-improving anticoncentration estimate due to Kesten.
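The localized/delocalized dichotomy can be made concrete with the inverse participation ratio (IPR) of a normalized vector. The following toy comparison is our own illustration, not taken from the paper; the localization centre, the value of `ell`, and the pure-exponential profile are assumptions made for the sketch:

```python
import math

# Toy illustration (not from the paper): the inverse participation ratio
# IPR(v) = sum_x v(x)^4 / (sum_x v(x)^2)^2 distinguishes an exponentially
# localized profile from a completely delocalized (flat) one.
def ipr(v):
    norm_sq = sum(x * x for x in v)
    return sum(x ** 4 for x in v) / (norm_sq ** 2)

N = 1000
center, ell = N // 2, 5.0  # hypothetical localization centre and length

# Exponentially localized profile |v(x)| ~ exp(-|x - center| / ell):
# its IPR is of order 1/ell, independent of the system size N.
localized = [math.exp(-abs(x - center) / ell) for x in range(N)]

# Flat profile: the IPR is exactly 1/N and vanishes as N grows.
delocalized = [1.0] * N

print(ipr(localized))    # order 1/ell (roughly 0.1 here)
print(ipr(delocalized))  # exactly 1/N = 0.001
```

Increasing `N` leaves the localized IPR essentially unchanged while the delocalized IPR keeps shrinking, which is the size-independent signature of localization that the paper's eigenvector estimates quantify.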

 
NSF-PAR ID: 10486679
Publisher / Repository: Springer Science + Business Media
Journal Name: Communications in Mathematical Physics
Volume: 405
Issue: 1
ISSN: 0010-3616
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    We prove that $\mathrm{poly}(t) \cdot n^{1/D}$-depth local random quantum circuits with two-qudit nearest-neighbor gates on a $D$-dimensional lattice with $n$ qudits are approximate $t$-designs in various measures. These include the "monomial" measure, meaning that the monomials of a random circuit from this family have expectation close to the value that would result from the Haar measure. Previously, the best bound was $\mathrm{poly}(t) \cdot n$ due to Brandão–Harrow–Horodecki (Commun Math Phys 346(2):397–434, 2016) for $D=1$. We also improve the "scrambling" and "decoupling" bounds for spatially local random circuits due to Brown and Fawzi (Scrambling speed of random quantum circuits, 2012). One consequence of our result is that, assuming the polynomial hierarchy ($\textsf{PH}$) is infinite and that certain counting problems are $\#\textsf{P}$-hard "on average", sampling within total variation distance from these circuits is hard for classical computers. Previously, exact sampling from the outputs of even constant-depth quantum circuits was known to be hard for classical computers under these assumptions. However, the standard strategy for extending this hardness result to approximate sampling requires the quantum circuits to have a property called "anti-concentration", meaning roughly that the output has near-maximal entropy. Unitary 2-designs have the desired anti-concentration property. Our result improves the required depth for this level of anti-concentration from linear depth to a sub-linear value, depending on the geometry of the interactions. This is relevant to a recent experiment by the Google Quantum AI group to perform such a sampling task with 53 qubits on a two-dimensional lattice (Arute et al. in Nature 574(7779):505–510, 2019; Boixo et al. in Nat Phys 14(6):595–600, 2018) (and related experiments by USTC), and confirms their conjecture that $O(\sqrt{n})$ depth suffices for anti-concentration. The proof is based on a previous construction of $t$-designs by Brandão et al. (2016), an analysis of how approximate designs behave under composition, and an extension of the quasi-orthogonality of permutation operators developed by Brandão et al. (2016). Different versions of the approximate design condition correspond to different norms, and part of our contribution is to introduce the norm corresponding to anti-concentration and to establish equivalence between these various norms for low-depth circuits. For random circuits with long-range gates, we use different methods to show that anti-concentration happens at circuit size $O(n \ln^2 n)$, corresponding to depth $O(\ln^3 n)$. We also show a lower bound of $\Omega(n \ln n)$ for the size of such circuits in this case. We also prove that anti-concentration is possible in depth $O(\ln n \ln \ln n)$ (size $O(n \ln n \ln \ln n)$) using a different model.
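For intuition about anti-concentration, the sketch below (our own illustration, not from the paper; the qubit count and trial count are arbitrary choices) estimates the collision probability $\sum_x p(x)^2$ of the outcome distribution of a Haar-random state and compares it with the exact Haar (and hence unitary 2-design) value $2/(2^n+1)$, only about twice the minimum $2^{-n}$ attained by the uniform distribution:

```python
import random

random.seed(0)

n = 3                 # qubits; Hilbert-space dimension d = 2**n
d = 2 ** n
trials = 20000

def haar_state(d):
    """Sample a Haar-random pure state as a normalized complex Gaussian vector."""
    amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(d)]
    norm = sum(abs(a) ** 2 for a in amps) ** 0.5
    return [a / norm for a in amps]

# Monte Carlo estimate of the collision probability sum_x p(x)^2,
# where p(x) = |<x|psi>|^2 is the measurement-outcome distribution.
avg_coll = 0.0
for _ in range(trials):
    p = [abs(a) ** 2 for a in haar_state(d)]
    avg_coll += sum(q * q for q in p)
avg_coll /= trials

print(avg_coll)     # Monte Carlo estimate
print(2 / (d + 1))  # exact second-moment value for Haar / any 2-design
```

A collision probability within a constant factor of $1/d$ is exactly the "near-maximal entropy" behaviour that approximate 2-designs guarantee, which is what the hardness-of-approximate-sampling argument needs.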

     
  2. Abstract

    We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $p_{\text{noisy}}$ and the corresponding noiseless output distribution $p_{\text{ideal}}$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $F$ that measures this correlation behaves as $F = \exp(-2s\epsilon \pm O(s\epsilon^2))$, where $\epsilon$ is the probability of error per circuit location and $s$ is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $p_{\text{noisy}}$ and the uniform distribution $p_{\text{unif}}$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $p_{\text{noisy}} \approx F p_{\text{ideal}} + (1-F) p_{\text{unif}}$. In other words, although at least one local error occurs with probability $1-F$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $O(F \epsilon \sqrt{s})$. Thus, the "white-noise approximation" is meaningful when $\epsilon \sqrt{s} \ll 1$, a quadratically weaker condition than the $\epsilon s \ll 1$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $s \ge \Omega(n \log n)$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $\epsilon^{-1} \ge \tilde{\Omega}(n)$, which is needed to ensure errors are scrambled faster than $F$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
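The white-noise picture can be checked numerically in a few lines. This sketch is our own; the Porter–Thomas (normalized exponential) model for $p_{\text{ideal}}$ and the chosen values of $s$, $\epsilon$, and $D$ are illustrative assumptions. It builds the mixture $F p_{\text{ideal}} + (1-F) p_{\text{unif}}$ and verifies that the linear cross-entropy benchmark computed from it recovers $F \approx e^{-2s\epsilon}$:

```python
import math
import random

random.seed(1)

D = 4096                    # number of outcomes, D = 2**n (assumed)
s, eps = 500, 0.001         # gate count and per-location error rate (assumed)
F = math.exp(-2 * s * eps)  # fidelity from the abstract's leading-order formula

# Porter-Thomas-like ideal distribution: normalized exponential weights,
# for which D * sum_x p_ideal(x)^2 concentrates near 2.
w = [random.expovariate(1.0) for _ in range(D)]
total = sum(w)
p_ideal = [x / total for x in w]

# White-noise model of the noisy device's output distribution.
p_noisy = [F * p + (1 - F) / D for p in p_ideal]

# Linear cross-entropy benchmark: XEB = D * sum_x p_noisy(x) * p_ideal(x) - 1.
xeb = D * sum(pn * pi for pn, pi in zip(p_noisy, p_ideal)) - 1

print(F)    # exp(-1), about 0.368
print(xeb)  # close to F, up to Porter-Thomas fluctuations
```

The algebra behind the agreement: under the mixture, $\text{XEB} = F\,(D \sum_x p_{\text{ideal}}(x)^2 - 1)$, and the bracket is close to 1 for Porter–Thomas statistics.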

     
  3. By discretizing an argument of Kislyakov, Naor and Schechtman proved that the 1-Wasserstein metric over the planar grid $\{0,1,\dots,n\}^2$ has $L_1$-distortion bounded below by a constant multiple of $\sqrt{\log n}$. We provide a new "dimensionality" interpretation of Kislyakov's argument, showing that if $\{G_n\}_{n=1}^\infty$ is a sequence of graphs whose isoperimetric dimension and Lipschitz-spectral dimension equal a common number $\delta \in [2,\infty)$, then the 1-Wasserstein metric over $G_n$ has $L_1$-distortion bounded below by a constant multiple of $(\log |G_n|)^{\frac{1}{\delta}}$. We proceed to compute these dimensions for $\oslash$-powers of certain graphs. In particular, we get that the sequence of diamond graphs $\{\mathsf{D}_n\}_{n=1}^\infty$ has isoperimetric dimension and Lipschitz-spectral dimension equal to 2, obtaining as a corollary that the 1-Wasserstein metric over $\mathsf{D}_n$ has $L_1$-distortion bounded below by a constant multiple of $\sqrt{\log |\mathsf{D}_n|}$. This answers a question of Dilworth, Kutzarova, and Ostrovskii and exhibits only the third sequence of $L_1$-embeddable graphs whose sequence of 1-Wasserstein metrics is not $L_1$-embeddable.
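As background for the metric in this abstract (our own sketch, not part of the proof): on a path graph $\{0,\dots,n\}$ the 1-Wasserstein distance between two distributions has a closed form, the $\ell_1$ distance between their CDFs. The embedding obstructions above arise precisely because no comparably simple $L_1$ representation exists over two-dimensional grids or diamond graphs:

```python
def wasserstein1_path(p, q):
    """1-Wasserstein distance between distributions p, q on the path 0..n,
    where moving one unit of mass across one edge costs 1.
    Closed form: sum over x of |P(x) - Q(x)| for the partial sums P, Q."""
    assert len(p) == len(q)
    dist, cdf_gap = 0.0, 0.0
    for px, qx in zip(p, q):
        cdf_gap += px - qx   # running difference of the two CDFs
        dist += abs(cdf_gap)
    return dist

# Point mass at vertex 0 vs point mass at vertex 3:
# all the mass travels distance 3, so the cost is 3.
a = [1.0, 0.0, 0.0, 0.0]
b = [0.0, 0.0, 0.0, 1.0]
print(wasserstein1_path(a, b))  # 3.0
```

This closed form is itself an isometric $L_1$ embedding of the 1-Wasserstein metric over a path; the theorem quoted above shows the analogous statement fails badly once the graph is genuinely two-dimensional.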

     
  4. Abstract

    Leptoquarks ($\textrm{LQ}$s) are hypothetical particles that appear in various extensions of the Standard Model (SM) and that can explain observed differences between SM theory predictions and experimental results. The production of these particles has been widely studied at various experiments, most recently at the Large Hadron Collider (LHC), and stringent bounds have been placed on their masses and couplings, assuming the simplest beyond-SM (BSM) hypotheses. However, the limits are significantly weaker for $\textrm{LQ}$ models with family non-universal couplings containing enhanced couplings to third-generation fermions. We present a new study on the production of a $\textrm{LQ}$ at the LHC, with preferential couplings to third-generation fermions, considering proton-proton collisions at $\sqrt{s} = 13\,\textrm{TeV}$ and $\sqrt{s} = 13.6\,\textrm{TeV}$. Such a hypothesis is well motivated theoretically, and it can explain the recent anomalies in the precision measurements of $\textrm{B}$-meson decay rates, specifically the $R_{D^{(*)}}$ ratios. Under a simplified model where the $\textrm{LQ}$ masses and couplings are free parameters, we focus on cases where the $\textrm{LQ}$ decays to a $\tau$ lepton and a $\textrm{b}$ quark, and study how the results are affected by different assumptions about chiral currents and interference effects with other BSM processes with the same final states, such as diagrams with a heavy vector boson, $\textrm{Z}^{\prime}$. The analysis is performed using machine learning techniques, resulting in an increased discovery reach at the LHC, allowing us to probe new-physics phase space which addresses the $\textrm{B}$-meson anomalies, for $\textrm{LQ}$ masses up to $5.00\,\textrm{TeV}$, for the high-luminosity LHC scenario.

     
  5. Abstract

    Approximate integer programming is the following: for a given convex body $K \subseteq \mathbb{R}^n$, either determine whether $K \cap \mathbb{Z}^n$ is empty, or find an integer point in the convex body $2 \cdot (K - c) + c$, which is $K$ scaled by a factor of 2 about its center of gravity $c$. Approximate integer programming can be solved in time $2^{O(n)}$, while the fastest known methods for exact integer programming run in time $2^{O(n)} \cdot n^n$. So far, no efficient methods for integer programming are known that are based on approximate integer programming. Our main contributions are two such methods, each yielding novel complexity results. First, we show that an integer point $x^* \in (K \cap \mathbb{Z}^n)$ can be found in time $2^{O(n)}$, provided that the remainders of each component $x_i^* \bmod \ell$ of $x^*$, for some arbitrarily fixed $\ell \ge 5(n+1)$, are given. The algorithm is based on a cutting-plane technique, iteratively halving the volume of the feasible set. The cutting planes are determined via approximate integer programming. Enumeration of the possible remainders gives a $2^{O(n)} n^n$ algorithm for general integer programming. This matches the current best bound of an algorithm by Dadush (Integer programming, lattice algorithms, and deterministic volume estimation. Georgia Institute of Technology, Atlanta, 2012) that is considerably more involved. Our algorithm also relies on a new asymmetric approximate Carathéodory theorem that might be of interest on its own. Our second method concerns integer programming problems in equation standard form $Ax = b,\ 0 \le x \le u,\ x \in \mathbb{Z}^n$. Such a problem can be reduced to the solution of $\prod_i O(\log u_i + 1)$ approximate integer programming problems. This implies, for example, that knapsack or subset-sum problems with polynomial variable range $0 \le x_i \le p(n)$ can be solved in time $(\log n)^{O(n)}$. For these problems, the best running time so far was $n^n \cdot 2^{O(n)}$.
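The remainder idea can be illustrated at toy scale. This is our own sketch with $\ell = 2$ and a brute-force base case, not the paper's algorithm (which takes $\ell \ge 5(n+1)$ and replaces enumeration by approximate integer programming): writing $x = r + \ell y$ with $r \equiv x \pmod{\ell}$ turns a problem with variable range $u$ into one with range roughly $u/\ell$, and recursing gives about $\log_\ell u$ levels.

```python
from itertools import product

def solve_ip(A, b, u):
    """Toy solver for A.x == b, 0 <= x_i <= u_i, x integer (A, b nonnegative).
    Recurses on remainders mod 2: substituting x = r + 2y halves the
    variable range, mirroring (at l = 2) the remainder enumeration above."""
    if all(ui <= 1 for ui in u):  # base case: tiny box, brute force
        for x in product(*(range(ui + 1) for ui in u)):
            if sum(a * xi for a, xi in zip(A, x)) == b:
                return list(x)
        return None
    for r in product(*(range(min(ui, 1) + 1) for ui in u)):
        rest = b - sum(a * ri for a, ri in zip(A, r))
        if rest < 0 or rest % 2:
            continue  # this remainder vector is inconsistent with b mod 2
        # Reduced problem in y with halved upper bounds.
        y = solve_ip(A, rest // 2, [(ui - ri) // 2 for ui, ri in zip(u, r)])
        if y is not None:
            return [ri + 2 * yi for ri, yi in zip(r, y)]
    return None

sol = solve_ip([3, 5, 7], 28, [10, 10, 10])
print(sol)  # a feasible point, i.e. 3*x0 + 5*x1 + 7*x2 == 28 within bounds
```

Here each level enumerates all $2^n$ remainder vectors, which is exactly why plain enumeration costs a factor exponential per level; the paper's contribution is to get the per-level work down to a single $2^{O(n)}$ approximate integer program.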

     