We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution
NSF-PAR ID: 10378886
Publisher / Repository: Springer Science + Business Media
Journal Name: Theory of Computing Systems
Volume: 67
Issue: 1
ISSN: 1432-4350
Page Range / eLocation ID: p. 149–177
Format(s): Medium: X
Sponsoring Org: National Science Foundation

$p_{\text{noisy}}$ and the corresponding noiseless output distribution $p_{\text{ideal}}$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $F$ that measures this correlation behaves as $F=\exp(-2s\epsilon \pm O(s\epsilon^2))$, where $\epsilon$ is the probability of error per circuit location and $s$ is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $p_{\text{noisy}}$ and the uniform distribution $p_{\text{unif}}$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $p_{\text{noisy}}\approx F p_{\text{ideal}} + (1-F) p_{\text{unif}}$. In other words, although at least one local error occurs with probability $1-F$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $O(F\epsilon\sqrt{s})$. Thus, the “white-noise approximation” is meaningful when $\epsilon\sqrt{s} \ll 1$, a quadratically weaker condition than the $\epsilon s \ll 1$ requirement to maintain high fidelity.
The bound applies if the circuit size satisfies $s \ge \Omega(n\log(n))$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $\epsilon^{-1} \ge \tilde{\Omega}(n)$, which is needed to ensure errors are scrambled faster than $F$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
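The mixture formula above lends itself to a quick numerical sanity check. The sketch below is an illustration of the stated scaling, not the paper's derivation: it models the ideal output distribution as Porter–Thomas-like (exponentially distributed weights, an assumption), forms the white-noise mixture, and checks that the linear cross-entropy benchmark recovers $F \approx \exp(-2s\epsilon)$.

```python
import math
import random

random.seed(0)

n_qubits = 14
D = 2 ** n_qubits            # number of measurement outcomes
s, eps = 400, 1e-3           # two-qubit gate count, error rate per location
F = math.exp(-2 * s * eps)   # leading-order fidelity F = exp(-2*s*eps)

# Porter-Thomas-like ideal distribution: exponential weights, normalized.
w = [random.expovariate(1.0) for _ in range(D)]
total = sum(w)
p_ideal = [x / total for x in w]

# White-noise approximation: p_noisy = F * p_ideal + (1 - F) * uniform.
p_noisy = [F * p + (1 - F) / D for p in p_ideal]

# Linear cross-entropy benchmark: F_XEB = D * E_{x ~ p_noisy}[p_ideal(x)] - 1.
# For Porter-Thomas statistics, D * sum(p_ideal^2) is close to 2, so F_XEB ~ F.
f_xeb = D * sum(q * p for q, p in zip(p_noisy, p_ideal)) - 1
print(f"F = {F:.4f}, F_XEB = {f_xeb:.4f}")
```

The uniform part of the mixture contributes nothing to the benchmark (its cross term cancels the subtracted 1), which is why the XEB isolates the coefficient $F$ of the ideal part.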
Abstract In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in finegrained complexity. In several cases our proof systems have optimal running time. Our main results include:
Certifying that a list of $n$ integers has no 3SUM solution can be done in Merlin–Arthur time $\tilde{O}(n)$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $\tilde{O}(n^{1.5})$ time (that is, there is a proof system with proofs of length $\tilde{O}(n^{1.5})$ and a deterministic verifier running in $\tilde{O}(n^{1.5})$ time).
Counting the number of $k$-cliques with total edge weight equal to zero in an $n$-node graph can be done in Merlin–Arthur time $\tilde{O}(n^{\lceil k/2\rceil})$ (where $k\ge 3$). For odd $k$, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an $m$-edge graph can be done in Merlin–Arthur time $\tilde{O}(m)$. Previous Merlin–Arthur protocols by Williams [CCC’16] and Björklund and Kaski [PODC’16] could only count $k$-cliques in unweighted graphs, and had worse running times for small $k$.
Computing the All-Pairs Shortest Distances matrix for an $n$-node graph can be done in Merlin–Arthur time $\tilde{O}(n^2)$. Note this is optimal, as the matrix can have $\Omega(n^2)$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $\tilde{O}(n^{2.94})$ nondeterministic time algorithm.
Certifying that an $n$-variable $k$-CNF is unsatisfiable can be done in Merlin–Arthur time $2^{n/2 - n/O(k)}$. We also observe an algebrization barrier for the previous $2^{n/2}\cdot \textrm{poly}(n)$-time Merlin–Arthur protocol of R. Williams [CCC’16] for $\#$SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for $k$-UNSAT running in $2^{n/2}/n^{\omega(1)}$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.
Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to $n$ integers can be done in Merlin–Arthur time $2^{n/3}\cdot \textrm{poly}(n)$, improving on the previous best protocol by Nederlof [IPL 2017] which took $2^{0.49991n}\cdot \textrm{poly}(n)$ time. Certifying a Quantified Boolean Formula is true can be done in Merlin–Arthur time $2^{4n/5}\cdot \textrm{poly}(n)$; previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in $2^{2n/3}\cdot \textrm{poly}(n)$ time.
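The common thread in these results is that a verifier can check a claimed answer much faster than it could recompute it. As a self-contained illustration of that phenomenon (Freivalds' algorithm, a classical textbook primitive, not one of the paper's protocols), the sketch below verifies a claimed matrix product in $O(n^2)$ randomized time per trial, versus roughly cubic time to multiply.

```python
import random

def freivalds(A, B, C, trials=20):
    """Randomized check that C equals the matrix product A*B.

    Each trial costs O(n^2): multiply by a random 0/1 vector r and
    compare A*(B*r) against C*r. A correct C always passes; an
    incorrect C is rejected with probability at least 1 - 2**(-trials).
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False
    return True

random.seed(0)
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(freivalds(A, B, [[19, 22], [43, 50]]))  # correct product
print(freivalds(A, B, [[19, 22], [43, 51]]))  # one wrong entry
```

Here the "proof" is simply the claimed matrix $C$; the Merlin–Arthur protocols above achieve the same verify-cheaply effect for far less structured claims.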
For each odd integer $n \geq 3$, we construct a rank-3 graph $\Lambda_n$ with involution $\gamma_n$ whose real $C^*$-algebra $C^*_{\scriptscriptstyle \mathbb {R}}(\Lambda_n, \gamma_n)$ is stably isomorphic to the exotic Cuntz algebra $\mathcal E_n$. This construction is optimal, as we prove that a rank-2 graph with involution $(\Lambda ,\gamma )$ can never satisfy $C^*_{\scriptscriptstyle \mathbb {R}}(\Lambda , \gamma )\sim _{ME} \mathcal E_n$, and Boersema reached the same conclusion for rank-1 graphs (directed graphs) in [Münster J. Math. 10 (2017), pp. 485–521, Corollary 4.3]. Our construction relies on a rank-1 graph with involution $(\Lambda , \gamma )$ whose real $C^*$-algebra $C^*_{\scriptscriptstyle \mathbb {R}}(\Lambda , \gamma )$ is stably isomorphic to the suspension $S \mathbb {R}$. In the Appendix, we show that the $i$-fold suspension $S^i \mathbb {R}$ is stably isomorphic to a graph algebra iff $-2 \leq i \leq 1$.
Abstract The free multiplicative Brownian motion $b_{t}$ is the large-$N$ limit of the Brownian motion on $\mathsf {GL}(N;\mathbb {C})$, in the sense of $*$-distributions. The natural candidate for the large-$N$ limit of the empirical distribution of eigenvalues is thus the Brown measure of $b_{t}$. In previous work, the second and third authors showed that this Brown measure is supported in the closure of a region $\Sigma _{t}$ that appeared in the work of Biane. In the present paper, we compute the Brown measure completely. It has a continuous density $W_{t}$ on $\overline{\Sigma }_{t}$, which is strictly positive and real analytic on $\Sigma _{t}$. This density has a simple form in polar coordinates: $$W_{t}(r,\theta )=\frac{1}{r^{2}}w_{t}(\theta ),$$ where $w_{t}$ is an analytic function determined by the geometry of the region $\Sigma _{t}$. We show also that the spectral measure of free unitary Brownian motion $u_{t}$ is a “shadow” of the Brown measure of $b_{t}$, precisely mirroring the relationship between the circular and semicircular laws. We develop several new methods, based on stochastic differential equations and PDE, to prove these results.
Abstract Let us fix a prime $p$ and a homogeneous system of $m$ linear equations $a_{j,1}x_1+\dots +a_{j,k}x_k=0$ for $j=1,\dots ,m$ with coefficients $a_{j,i}\in \mathbb {F}_p$. Suppose that $k\ge 3m$, that $a_{j,1}+\dots +a_{j,k}=0$ for $j=1,\dots ,m$, and that every $m\times m$ minor of the $m\times k$ matrix $(a_{j,i})_{j,i}$ is non-singular. Then we prove that for any (large) $n$, any subset $A\subseteq \mathbb {F}_p^n$ of size $|A|>C\cdot \Gamma ^n$ contains a solution $(x_1,\dots ,x_k)\in A^k$ to the given system of equations such that the vectors $x_1,\dots ,x_k\in A$ are all distinct. Here, $C$ and $\Gamma$ are constants only depending on $p$, $m$ and $k$ such that $\Gamma <p$. The crucial point here is the condition that the vectors $x_1,\dots ,x_k$ in the solution $(x_1,\dots ,x_k)\in A^k$ be distinct. If we relax this condition and only demand that $x_1,\dots ,x_k$ are not all equal, then the statement would follow easily from Tao’s slice rank polynomial method. However, handling the distinctness condition is much harder, and requires a new approach. While all previous combinatorial applications of the slice rank polynomial method have relied on the slice rank of diagonal tensors, we use a slice rank argument for a non-diagonal tensor in combination with combinatorial and probabilistic arguments.
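On a toy scale, the theorem's setup can be made concrete by brute force. The sketch below uses illustrative parameters of our choosing (not the slice-rank argument): $p=5$, $m=1$, $k=3$ and the equation $x_1+x_2-2x_3=0$, whose coefficients sum to zero and whose $1\times 1$ minors are all nonzero, and it finds a solution with pairwise-distinct vectors inside a dense subset $A\subseteq \mathbb {F}_5^2$.

```python
from itertools import product, permutations

p, n = 5, 2
coeffs = (1, 1, -2)  # a_1 + a_2 + a_3 = 0 mod p; every 1x1 minor is nonzero

def solves(triple):
    """Check the linear equation coordinate-wise over F_p."""
    return all(
        sum(c * vec[i] for c, vec in zip(coeffs, triple)) % p == 0
        for i in range(n)
    )

# An arbitrary dense subset A of F_5^2 (17 of the 25 vectors).
A = [v for v in product(range(p), repeat=n) if sum(v) % 3 != 0]

# Look for a solution (x1, x2, x3) in A^3 with all vectors distinct;
# permutations() enumerates exactly the triples of distinct elements.
found = next((t for t in permutations(A, 3) if solves(t)), None)
print(found)
```

The theorem guarantees such distinct solutions in every sufficiently large subset; the brute-force search merely exhibits one in a small example.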