We investigate electroabsorption (EA) in organic semiconductor microcavities to understand whether strong light-matter coupling nontrivially alters their nonlinear optical […]
NSF-PAR ID: 10387562
Publisher / Repository: Nature Publishing Group
Journal Name: Nature Communications
Volume: 13
Issue: 1
ISSN: 2041-1723
Sponsoring Org: National Science Foundation
More Like this

Abstract We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $p_{\text{noisy}}$ and the corresponding noiseless output distribution $p_{\text{ideal}}$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $F$ that measures this correlation behaves as $F = \exp(-2s\epsilon \pm O(s\epsilon^2))$, where $\epsilon$ is the probability of error per circuit location and $s$ is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $p_{\text{noisy}}$ and the uniform distribution $p_{\text{unif}}$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $p_{\text{noisy}} \approx F p_{\text{ideal}} + (1-F) p_{\text{unif}}$. In other words, although at least one local error occurs with probability $1-F$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $O(F\epsilon\sqrt{s})$. Thus, the “white-noise approximation” is meaningful when $\epsilon\sqrt{s} \ll 1$, a quadratically weaker condition than the $\epsilon s \ll 1$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $s \ge \Omega(n\log n)$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $\epsilon^{-1} \ge \tilde{\Omega}(n)$, which is needed to ensure errors are scrambled faster than $F$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
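To see the structure of the white-noise approximation concretely, here is a minimal Python sketch (ours, not the paper's): it mixes a stand-in ideal distribution with the uniform distribution using weight $F = \exp(-2s\epsilon)$ and recovers $F$ from the linear cross-entropy benchmark. The exponential-weight "ideal" distribution and the toy parameter values are our assumptions for illustration, not the paper's circuit ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10                    # number of qubits (toy size, our assumption)
d = 2 ** n                # Hilbert-space dimension
s = 500                   # number of two-qubit gates (our assumption)
eps = 1e-3                # error probability per circuit location (our assumption)

F = np.exp(-2 * s * eps)  # fidelity scaling from the abstract: F = exp(-2*s*eps)

# Stand-in "ideal" output distribution: exponential (Porter-Thomas-like) weights,
# the typical shape for random-circuit outputs.
p_ideal = rng.exponential(size=d)
p_ideal /= p_ideal.sum()
p_unif = np.full(d, 1.0 / d)

# White-noise approximation: p_noisy ~= F * p_ideal + (1 - F) * p_unif
p_noisy = F * p_ideal + (1 - F) * p_unif

def linear_xeb(p_out, p_id, d):
    """Linear cross-entropy benchmark: d * sum_x p_out(x) * p_id(x) - 1."""
    return d * np.dot(p_out, p_id) - 1.0

# For an exact mixture, XEB(p_noisy) = F * XEB(p_ideal), so the ratio recovers F.
print("target F :", F)
print("recovered:", linear_xeb(p_noisy, p_ideal, d) / linear_xeb(p_ideal, p_ideal, d))
```

Because the mixture is exact in this toy model, the recovered ratio equals $F$ identically; for real noisy circuits the abstract bounds the deviation of $p_{\text{noisy}}$ from the mixture by $O(F\epsilon\sqrt{s})$ in total variation.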
Abstract We present the first unquenched lattice-QCD calculation of the form factors for the decay $B\rightarrow D^*\ell\nu$ at nonzero recoil. Our analysis includes 15 MILC ensembles with $N_f = 2+1$ flavors of asqtad sea quarks, with a strange quark mass close to its physical mass. The lattice spacings range from $a \approx 0.15$ fm down to 0.045 fm, while the ratio between the light- and the strange-quark masses ranges from 0.05 to 0.4. The valence b and c quarks are treated using the Wilson-clover action with the Fermilab interpretation, whereas the light sector employs asqtad staggered fermions. We extrapolate our results to the physical point in the continuum limit using rooted staggered heavy-light meson chiral perturbation theory. Then we apply a model-independent parametrization to extend the form factors to the full kinematic range. With this parametrization we perform a joint lattice-QCD/experiment fit using several experimental datasets to determine the CKM matrix element $|V_{cb}|$. We obtain $|V_{cb}| = (38.40 \pm 0.68_{\text{th}} \pm 0.34_{\text{exp}} \pm 0.18_{\text{EM}}) \times 10^{-3}$. The first error is theoretical, the second comes from experiment and the last one includes electromagnetic and electroweak uncertainties, with an overall $\chi^2/\text{dof} = 126/84$, which illustrates the tensions between the experimental data sets, and between theory and experiment. This result is in agreement with previous exclusive determinations, but the tension with the inclusive determination remains. Finally, we integrate the differential decay rate obtained solely from lattice data to predict $R(D^*) = 0.265 \pm 0.013$, which confirms the current tension between theory and experiment.
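As a quick sanity check on the quoted uncertainties, the snippet below combines the three error components of $|V_{cb}|$ in quadrature; treating them as independent is our simplifying assumption for illustration, not a statement from the analysis.

```python
import math

# Central value and uncertainties quoted in the abstract, in units of 1e-3
vcb = 38.40
err_th, err_exp, err_em = 0.68, 0.34, 0.18

# Assumption: the three components are independent, so they add in quadrature.
total = math.sqrt(err_th**2 + err_exp**2 + err_em**2)
print(f"|Vcb| = ({vcb:.2f} +/- {total:.2f}) x 10^-3")  # total ~ 0.78 in these units
```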
Abstract In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:

- Certifying that a list of n integers has no 3SUM solution can be done in Merlin–Arthur time $\tilde{O}(n)$ (a reference check spelling out this property is sketched after the list). Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $\tilde{O}(n^{1.5})$ time (that is, there is a proof system with proofs of length $\tilde{O}(n^{1.5})$ and a deterministic verifier running in $\tilde{O}(n^{1.5})$ time).

- Counting the number of k-cliques with total edge weight equal to zero in an n-node graph can be done in Merlin–Arthur time $\tilde{O}(n^{\lceil k/2\rceil})$ (where $k \ge 3$). For odd k, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an m-edge graph can be done in Merlin–Arthur time $\tilde{O}(m)$. Previous Merlin–Arthur protocols by Williams [CCC’16] and Björklund and Kaski [PODC’16] could only count k-cliques in unweighted graphs, and had worse running times for small k.

- Computing the All-Pairs Shortest Distances matrix for an n-node graph can be done in Merlin–Arthur time $\tilde{O}(n^2)$. Note this is optimal, as the matrix can have $\Omega(n^2)$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $\tilde{O}(n^{2.94})$ nondeterministic time algorithm.

- Certifying that an n-variable k-CNF is unsatisfiable can be done in Merlin–Arthur time $2^{n/2 - n/O(k)}$. We also observe an algebrization barrier for the previous $2^{n/2}\cdot\textrm{poly}(n)$-time Merlin–Arthur protocol of R. Williams [CCC’16] for $\#$SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for k-UNSAT running in $2^{n/2}/n^{\omega(1)}$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.

- Certifying that a Quantified Boolean Formula is true can be done in Merlin–Arthur time $2^{4n/5}\cdot\textrm{poly}(n)$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in $2^{2n/3}\cdot\textrm{poly}(n)$ time.

Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to n integers can be done in Merlin–Arthur time $2^{n/3}\cdot\textrm{poly}(n)$, improving on the previous best protocol by Nederlof [IPL 2017] which took $2^{0.49991n}\cdot\textrm{poly}(n)$ time.
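To make the first certified property concrete, here is a brute-force reference check (our illustration, not the protocol itself) of whether a list of integers contains a 3SUM solution, i.e., a triple of entries summing to zero. The Merlin–Arthur protocol above certifies the absence of such a triple with proof length and verification time $\tilde{O}(n)$, far below the cubic time of this naive check.

```python
from itertools import combinations

def has_3sum(nums):
    """Reference specification: does some triple of distinct entries sum to zero?
    O(n^3) brute force; defines the property the MA protocol certifies."""
    return any(a + b + c == 0 for a, b, c in combinations(nums, 3))

print(has_3sum([3, -1, -2, 8]))  # True: 3 + (-1) + (-2) == 0
print(has_3sum([1, 2, 4, 9]))    # False: no triple sums to zero
```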
Abstract We prove that $\textrm{poly}(t)\cdot n^{1/D}$-depth local random quantum circuits with two-qudit nearest-neighbor gates on a D-dimensional lattice with n qudits are approximate t-designs in various measures. These include the “monomial” measure, meaning that the monomials of a random circuit from this family have expectation close to the value that would result from the Haar measure. Previously, the best bound was $\textrm{poly}(t)\cdot n$, due to Brandão–Harrow–Horodecki (Commun Math Phys 346(2):397–434, 2016) for $D = 1$. We also improve the “scrambling” and “decoupling” bounds for spatially local random circuits due to Brown and Fawzi (Scrambling speed of random quantum circuits, 2012). One consequence of our result is that assuming the polynomial hierarchy ($\textsf{PH}$) is infinite and that certain counting problems are $\#\textsf{P}$-hard “on average”, sampling within total variation distance from these circuits is hard for classical computers. Previously, exact sampling from the outputs of even constant-depth quantum circuits was known to be hard for classical computers under these assumptions. However, the standard strategy for extending this hardness result to approximate sampling requires the quantum circuits to have a property called “anti-concentration”, meaning roughly that the output has near-maximal entropy. Unitary 2-designs have the desired anti-concentration property. Our result improves the required depth for this level of anti-concentration from linear depth to a sublinear value, depending on the geometry of the interactions. This is relevant to a recent experiment by the Google Quantum AI group to perform such a sampling task with 53 qubits on a two-dimensional lattice (Arute et al. in Nature 574(7779):505–510, 2019; Boixo et al. in Nat Phys 14(6):595–600, 2018) (and related experiments by USTC), and confirms their conjecture that $O(\sqrt{n})$ depth suffices for anti-concentration. The proof is based on a previous construction of t-designs by Brandão et al. (2016), an analysis of how approximate designs behave under composition, and an extension of the quasi-orthogonality of permutation operators developed by Brandão et al. (2016). Different versions of the approximate design condition correspond to different norms, and part of our contribution is to introduce the norm corresponding to anti-concentration and to establish equivalence between these various norms for low-depth circuits. For random circuits with long-range gates, we use different methods to show that anti-concentration happens at circuit size $O(n\ln^2 n)$, corresponding to depth $O(\ln^3 n)$. We also show a lower bound of $\Omega(n\ln n)$ on the size of such circuits in this case. We also prove that anti-concentration is possible in depth $O(\ln n \ln\ln n)$ (size $O(n\ln n \ln\ln n)$) using a different model.
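Anti-concentration can be made quantitative via the collision probability $\sum_x p(x)^2$, which for a Haar-random state in dimension $d$ equals $2/(d+1)$, only about twice the uniform value $1/d$. The sketch below checks this benchmark numerically; sampling Haar-random states directly (rather than simulating the shallow circuits analyzed above) is our simplification for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_state_probs(d):
    """Output distribution of a Haar-random pure state in dimension d
    (complex Gaussian vector, normalized)."""
    z = rng.normal(size=d) + 1j * rng.normal(size=d)
    p = np.abs(z) ** 2
    return p / p.sum()

n = 8                      # qubits (toy size, our assumption)
d = 2 ** n
trials = 200

Z = np.mean([np.sum(haar_state_probs(d) ** 2) for _ in range(trials)])
print("empirical E[sum p^2]:", Z)
print("Haar value 2/(d+1)  :", 2 / (d + 1))  # anti-concentrated: ~2x uniform 1/d
```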
Abstract The notion of generalized rank in the context of multiparameter persistence has become an important ingredient for defining interesting homological structures such as generalized persistence diagrams. However, its efficient computation has not yet been studied in the literature. We show that the generalized rank over a finite interval I of a $\textbf{Z}^2$-indexed persistence module M is equal to the generalized rank of the zigzag module that is induced on a certain path in I tracing mostly its boundary. Hence, we can compute the generalized rank of M over I by computing the barcode of the zigzag module obtained by restricting to that path. If M is the homology of a bifiltration F of $t$ simplices (while accounting for multi-criticality) and I consists of $t$ points, this computation takes $O(t^\omega)$ time, where $\omega \in [2, 2.373)$ is the exponent of matrix multiplication. We apply this result to obtain an improved algorithm for the following problem: given a bifiltration inducing a module M, determine whether M is interval decomposable and, if so, compute all intervals supporting its indecomposable summands.
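As a one-parameter warm-up for the notion of generalized rank: for a single-parameter persistence module, the rank of the structure map $M_a \rightarrow M_b$ equals the number of barcode bars containing $[a, b]$. The helper below computes that count from a barcode; this is our illustration of a standard fact, not the paper’s two-parameter zigzag algorithm.

```python
def rank_over_interval(barcode, a, b):
    """Rank of the structure map M_a -> M_b of a one-parameter persistence
    module, given its barcode as (birth, death) pairs: count the bars that
    are alive on all of [a, b]."""
    assert a <= b
    return sum(1 for birth, death in barcode if birth <= a and b <= death)

# Barcode with three bars; only those spanning [1.0, 2.0] contribute.
barcode = [(0.0, 3.0), (0.5, 1.5), (1.0, 2.5)]
print(rank_over_interval(barcode, 1.0, 2.0))  # 2: bars (0,3) and (1,2.5)
```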