In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:
- Certifying that a list of n integers has no 3-SUM solution can be done in Merlin–Arthur time $$\tilde{O}(n)$$.
- Counting the number of k-cliques with total edge weight equal to zero in an n-node graph can be done in Merlin–Arthur time $${\tilde{O}}(n^{\lceil k/2\rceil })$$.
- Computing the All-Pairs Shortest Distances matrix for an n-node graph can be done in Merlin–Arthur time $$\tilde{O}(n^2)$$.
- Certifying that an n-variable k-CNF is unsatisfiable can be done in Merlin–Arthur time $$2^{n/2 - n/O(k)}$$.
- Certifying a Quantified Boolean Formula is true can be done in Merlin–Arthur time $$2^{4n/5}\cdot \textrm{poly}(n)$$.
Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to n integers can be done in Merlin–Arthur time $$2^{n/3}\cdot \textrm{poly}(n)$$.
- Award ID(s): 1729369
- NSF-PAR ID: 10411433
- Publisher / Repository: Springer Science + Business Media
- Date Published:
- Journal Name: Communications in Mathematical Physics
- Volume: 401
- Issue: 2
- ISSN: 0010-3616
- Page Range / eLocation ID: p. 1531-1626
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract: Certifying that a list of n integers has no 3-SUM solution can be done in Merlin–Arthur time $$\tilde{O}(n)$$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $$\tilde{O}(n^{1.5})$$ time (that is, there is a proof system with proofs of length $$\tilde{O}(n^{1.5})$$ and a deterministic verifier running in $$\tilde{O}(n^{1.5})$$ time). Counting the number of k-cliques with total edge weight equal to zero in an n-node graph can be done in Merlin–Arthur time $${\tilde{O}}(n^{\lceil k/2\rceil })$$ (where $$k\ge 3$$). For odd k, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an m-edge graph can be done in Merlin–Arthur time $${\tilde{O}}(m)$$. Previous Merlin–Arthur protocols by Williams [CCC’16] and Björklund and Kaski [PODC’16] could only count k-cliques in unweighted graphs, and had worse running times for small k. Computing the All-Pairs Shortest Distances matrix for an n-node graph can be done in Merlin–Arthur time $$\tilde{O}(n^2)$$. Note this is optimal, as the matrix can have $$\Omega (n^2)$$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $$\tilde{O}(n^{2.94})$$ nondeterministic time algorithm. Certifying that an n-variable k-CNF is unsatisfiable can be done in Merlin–Arthur time $$2^{n/2 - n/O(k)}$$. We also observe an algebrization barrier for the previous $$2^{n/2}\cdot \textrm{poly}(n)$$-time Merlin–Arthur protocol of R. Williams [CCC’16] for $$\#$$SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for k-UNSAT running in $$2^{n/2}/n^{\omega (1)}$$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol. Certifying a Quantified Boolean Formula is true can be done in Merlin–Arthur time $$2^{4n/5}\cdot \textrm{poly}(n)$$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in $$2^{2n/3}\cdot \textrm{poly}(n)$$ time. Certifying that there is no Subset Sum solution to n integers can be done in Merlin–Arthur time $$2^{n/3}\cdot \textrm{poly}(n)$$, improving on the previous best protocol by Nederlof [IPL 2017] which took $$2^{0.49991n}\cdot \textrm{poly}(n)$$ time.
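To make the Merlin–Arthur model concrete, here is a textbook example of this style of verification (Freivalds' matrix-product check; this is an illustration of the model, not one of the paper's protocols): Merlin claims a matrix product $$C = AB$$, and Arthur verifies the claim in $$O(n^2)$$ time per trial with a random vector, accepting a true claim with probability 1 and rejecting a false one with probability at least 1/2 per trial.

```python
import numpy as np

def freivalds_verify(A, B, C, trials=20, rng=None):
    """Probabilistically check Merlin's claim that C == A @ B.

    Each trial costs O(n^2): three matrix-vector products with a
    random 0/1 vector. A correct C is always accepted; a wrong C
    is rejected with probability >= 1/2 per independent trial.
    """
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    for _ in range(trials):
        x = rng.integers(0, 2, size=n)        # random 0/1 vector
        if not np.array_equal(A @ (B @ x), C @ x):
            return False                       # caught a false claim
    return True                                # wrong with prob <= 2**-trials

# Honest Merlin: the true product is always accepted.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(50, 50))
B = rng.integers(-5, 5, size=(50, 50))
assert freivalds_verify(A, B, A @ B, rng=rng)

# Cheating Merlin: a perturbed product is rejected with high probability.
C_bad = A @ B
C_bad[0, 0] += 1
assert not freivalds_verify(A, B, C_bad, rng=rng)
```

Note the one-sided error matches the Merlin–Arthur definition used above: valid proofs are accepted with probability 1, and the rejection probability for invalid proofs can be driven arbitrarily close to 1 by repeating trials.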
Abstract: We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $$p_{\text {noisy}}$$ and the corresponding noiseless output distribution $$p_{\text {ideal}}$$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark F that measures this correlation behaves as $$F=\text {exp}(-2s\epsilon \pm O(s\epsilon ^2))$$, where $$\epsilon $$ is the probability of error per circuit location and s is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $$p_{\text {noisy}}$$ and the uniform distribution $$p_{\text {unif}}$$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $$p_{\text {noisy}}\approx Fp_{\text {ideal}}+ (1-F)p_{\text {unif}}$$. In other words, although at least one local error occurs with probability $$1-F$$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $$O(F\epsilon \sqrt{s})$$. Thus, the “white-noise approximation” is meaningful when $$\epsilon \sqrt{s} \ll 1$$, a quadratically weaker condition than the $$\epsilon s\ll 1$$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $$s \ge \Omega (n\log (n))$$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $$\epsilon ^{-1} \ge {\tilde{\Omega }}(n)$$, which is needed to ensure errors are scrambled faster than F decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
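The white-noise model $$p_{\text {noisy}}\approx Fp_{\text {ideal}}+ (1-F)p_{\text {unif}}$$ can be sanity-checked numerically: mix a Porter–Thomas-like ideal distribution with the uniform one at fidelity F, and a normalized linear cross-entropy score of the mixture comes out close to F. The sketch below is an illustration under stated assumptions (exponentially distributed ideal probabilities as a stand-in for deep-random-circuit statistics, and a particular XEB normalization), not the paper's actual simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 12
N = 2 ** n_qubits

# Porter-Thomas-like ideal output distribution: normalized exponential
# weights (an assumption standing in for a deep random circuit's output).
w = rng.exponential(size=N)
p_ideal = w / w.sum()
p_unif = np.full(N, 1.0 / N)

def xeb(p, p_ideal, N):
    """Linear cross-entropy benchmark, normalized so that sampling from
    p_ideal scores ~1 (for Porter-Thomas statistics) and uniform scores 0."""
    return N * np.dot(p, p_ideal) - 1

F = 0.3  # target fidelity
p_noisy = F * p_ideal + (1 - F) * p_unif   # white-noise model
print(f"XEB of mixture: {xeb(p_noisy, p_ideal, N):.3f}")   # close to F
```

By linearity, the uniform component contributes exactly zero to this score, so the mixture's XEB tracks F up to the statistical fluctuation of $$N\sum_x p_{\text{ideal}}(x)^2$$ around 2.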
Abstract: The elliptic flow $$(v_2)$$ of $${\textrm{D}}^{0}$$ mesons from beauty-hadron decays (non-prompt $${\textrm{D}}^{0}$$) was measured in midcentral (30–50%) Pb–Pb collisions at a centre-of-mass energy per nucleon pair $$\sqrt{s_{\textrm{NN}}} = 5.02$$ TeV with the ALICE detector at the LHC. The $${\textrm{D}}^{0}$$ mesons were reconstructed at midrapidity $$(|y|<0.8)$$ from their hadronic decay $$\mathrm {D^0 \rightarrow K^-\uppi ^+}$$, in the transverse momentum interval $$2< p_{\textrm{T}} < 12$$ GeV/c. The result indicates a positive $$v_2$$ for non-prompt $${{\textrm{D}}^{0}}$$ mesons with a significance of 2.7$$\sigma $$. The non-prompt $${{\textrm{D}}^{0}}$$-meson $$v_2$$ is lower than that of prompt non-strange D mesons with 3.2$$\sigma $$ significance in $$2< p_\textrm{T} < 8~\textrm{GeV}/c$$, and compatible with the $$v_2$$ of beauty-decay electrons. Theoretical calculations of beauty-quark transport in a hydrodynamically expanding medium describe the measurement within uncertainties.
Abstract: The double differential cross sections of the Drell–Yan lepton pair ($$\ell ^+\ell ^-$$, dielectron or dimuon) production are measured as functions of the invariant mass $$m_{\ell \ell }$$, transverse momentum $$p_{\textrm{T}} (\ell \ell )$$, and $$\varphi ^{*}_{\eta }$$. The $$\varphi ^{*}_{\eta }$$ observable, derived from angular measurements of the leptons and highly correlated with $$p_{\textrm{T}} (\ell \ell )$$, is used to probe the low-$$p_{\textrm{T}} (\ell \ell )$$ region in a complementary way. Dilepton masses up to 1$$\,\text {Te\hspace{-.08em}V}$$ are investigated. Additionally, a measurement is performed requiring at least one jet in the final state. To benefit from partial cancellation of the systematic uncertainty, the ratios of the differential cross sections for various $$m_{\ell \ell }$$ ranges to those in the Z mass peak interval are presented. The collected data correspond to an integrated luminosity of 36.3$$\,\text {fb}^{-1}$$ of proton–proton collisions recorded with the CMS detector at the LHC at a centre-of-mass energy of 13$$\,\text {Te\hspace{-.08em}V}$$. Measurements are compared with predictions based on perturbative quantum chromodynamics, including soft-gluon resummation.
Abstract: Cuprous oxide ($$\hbox {Cu}{}_2\hbox {O}$$) has recently emerged as a promising material in solid-state quantum technology, specifically for its excitonic Rydberg states characterized by large principal quantum numbers (n). The significant wavefunction size of these highly-excited states (proportional to $$n^2$$) enables strong long-range dipole-dipole (proportional to $$n^4$$) and van der Waals interactions (proportional to $$n^{11}$$). Currently, the highest-lying Rydberg states are found in naturally occurring $$\hbox {Cu}_2\hbox {O}$$. However, for technological applications, the ability to grow high-quality synthetic samples is essential. The fabrication of thin-film $$\hbox {Cu}{}_2\hbox {O}$$ samples is of particular interest as they hold potential for observing extreme single-photon nonlinearities through the Rydberg blockade. Nevertheless, due to the susceptibility of high-lying states to charged impurities, growing synthetic samples of sufficient quality poses a substantial challenge. This study successfully demonstrates the CMOS-compatible synthesis of a $$\hbox {Cu}{}_2\hbox {O}$$ thin film on a transparent substrate that showcases Rydberg excitons up to $$n = 8$$, which is readily suitable for photonic device fabrications. These findings mark a significant advancement towards the realization of scalable and on-chip integrable Rydberg quantum technologies.
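To get a feel for the scaling laws quoted above, a quick back-of-envelope comparison between a low-lying and a high-lying Rydberg state (pure arithmetic from the stated exponents, not a result of the paper):

```python
# Rydberg-exciton scalings quoted in the abstract:
# wavefunction size ~ n^2, dipole-dipole strength ~ n^4,
# van der Waals strength ~ n^11.
n_lo, n_hi = 3, 8
ratio = n_hi / n_lo
print(f"n={n_lo} -> n={n_hi}:")
print(f"  wavefunction size grows by ~{ratio**2:.1f}x")
print(f"  dipole-dipole strength by  ~{ratio**4:.1f}x")
print(f"  van der Waals strength by  ~{ratio**11:.0f}x")
```

The steep $$n^{11}$$ dependence is why even the modest increase to $$n = 8$$ reported here matters for interaction-based effects like the Rydberg blockade.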