Abstract We propose a generic compiler that can convert any zero-knowledge (ZK) proof for SIMD circuits into one for general circuits efficiently, together with an extension that preserves the space complexity of the proof system. Our compiler immediately produces new results improving upon the state of the art. By plugging our compiler into Antman, an interactive sublinear-communication protocol, we improve the overall communication complexity for general circuits from $$\mathcal{O}(C^{3/4})$$ to $$\mathcal{O}(C^{1/2})$$. Our implementation shows that for a circuit of size $$2^{27}$$ it achieves up to $$83.6\times$$ improvement in communication compared to the state-of-the-art implementation, and its end-to-end running time is at least $$70\%$$ faster in a 10 Mbps network. Using the recent results on compressed $$\varSigma$$-protocol theory, we obtain a discrete-log-based constant-round zero-knowledge argument with $$\mathcal{O}(C^{1/2})$$ communication and common random string length, improving over the state of the art, which has a linear-size common random string and requires heavier computation. We improve the communication of a designated n-verifier zero-knowledge proof from $$\mathcal{O}(nC/B+n^2B^2)$$ to $$\mathcal{O}(nC/B+n^2)$$. To demonstrate the scalability of our compilers, we extract a commit-and-prove SIMD ZK from Ligero and cast it in our framework. We also give one instantiation derived from LegoSNARK, demonstrating that the idea of CP-SNARK also fits our methodology.
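To make the notion of a "SIMD circuit" concrete: it is one subcircuit applied in lockstep to B independent input blocks, which is the repeated structure the compiler above starts from. The following is a hypothetical toy evaluator (not from the paper; the gate format, block count, and the toy prime modulus are all illustrative assumptions):

```python
def eval_simd(gates, inputs):
    """Evaluate one arithmetic circuit on B input blocks in lockstep.

    `gates` is a topologically ordered list of (op, left, right) tuples
    over wire indices, with op in {"add", "mul"}; `inputs` is a list of
    input wires, each carrying a vector of B field elements (ints mod a
    toy prime). All B copies share the same circuit, which is exactly
    the SIMD structure a SIMD-circuit ZK proof exploits.
    """
    P = 2**31 - 1  # toy prime field modulus (illustrative choice)
    wires = [list(v) for v in inputs]
    for op, l, r in gates:
        if op == "add":
            wires.append([(a + b) % P for a, b in zip(wires[l], wires[r])])
        else:  # "mul"
            wires.append([(a * b) % P for a, b in zip(wires[l], wires[r])])
    return wires[-1]
```

For example, the two-gate circuit computing (x + y) * x runs on B = 3 blocks at once, one output per block.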
Improved Merlin–Arthur Protocols for Central Problems in Fine-Grained Complexity
Abstract In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin’s proof plus the running time of Arthur. We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include: Certifying that a list of n integers has no 3-SUM solution can be done in Merlin–Arthur time $$\tilde{O}(n)$$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $$\tilde{O}(n^{1.5})$$ time (that is, there is a proof system with proofs of length $$\tilde{O}(n^{1.5})$$ and a deterministic verifier running in $$\tilde{O}(n^{1.5})$$ time). Counting the number of k-cliques with total edge weight equal to zero in an n-node graph can be done in Merlin–Arthur time $$\tilde{O}(n^{\lceil k/2\rceil})$$ (where $$k\ge 3$$). For odd k, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an m-edge graph can be done in Merlin–Arthur time $$\tilde{O}(m)$$. Previous Merlin–Arthur protocols by Williams [CCC’16] and Björklund and Kaski [PODC’16] could only count k-cliques in unweighted graphs, and had worse running times for small k. Computing the All-Pairs Shortest Distances matrix for an n-node graph can be done in Merlin–Arthur time $$\tilde{O}(n^2)$$. Note this is optimal, as the matrix can have $$\Omega(n^2)$$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $$\tilde{O}(n^{2.94})$$ nondeterministic time algorithm. Certifying that an n-variable k-CNF is unsatisfiable can be done in Merlin–Arthur time $$2^{n/2 - n/O(k)}$$. We also observe an algebrization barrier for the previous $$2^{n/2}\cdot \textrm{poly}(n)$$-time Merlin–Arthur protocol of R. Williams [CCC’16] for $$\#$$SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for k-UNSAT running in $$2^{n/2}/n^{\omega(1)}$$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol. Certifying that a Quantified Boolean Formula is true can be done in Merlin–Arthur time $$2^{4n/5}\cdot \textrm{poly}(n)$$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin’s proof depends on some of Arthur’s coins) running in $$2^{2n/3}\cdot \textrm{poly}(n)$$ time. Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to n integers can be done in Merlin–Arthur time $$2^{n/3}\cdot \textrm{poly}(n)$$, improving on the previous best protocol by Nederlof [IPL 2017], which took $$2^{0.49991n}\cdot \textrm{poly}(n)$$ time.
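The flavor of such probabilistically checked certificates can be illustrated by Freivalds’ classic check (a standard textbook example, not the paper’s actual protocols, which rely on polynomial identity testing): Merlin sends a claimed matrix product as the proof, and Arthur verifies it in randomized $$O(n^2)$$ time per round instead of recomputing the $$O(n^3)$$ product, accepting valid proofs with probability 1 and rejecting invalid ones with high probability.

```python
import random

def freivalds_verify(A, B, C, rounds=30):
    """Probabilistically verify that A @ B == C over the integers.

    Each round multiplies by a random 0/1 vector r and checks
    A(Br) == Cr, costing O(n^2) per round. If C is wrong, a single
    round detects it with probability >= 1/2, so an incorrect
    certificate survives all rounds with probability <= 2^-rounds.
    """
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # certificate C is definitely wrong
    return True  # correct with high probability
```

This matches the Merlin–Arthur completeness/soundness conditions quoted above: a correct C is always accepted, and an incorrect one is rejected with probability arbitrarily close to 1 as `rounds` grows.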
- Award ID(s):
- 2127597
- PAR ID:
- 10397595
- Publisher / Repository:
- Springer Science + Business Media
- Date Published:
- Journal Name:
- Algorithmica
- Volume:
- 85
- Issue:
- 8
- ISSN:
- 0178-4617
- Format(s):
- Medium: X Size: p. 2395-2426
- Sponsoring Org:
- National Science Foundation
More Like this
Abstract It is natural to generalize the online $$k$$-Server problem by allowing each request to specify not only a point p, but also a subset S of servers that may serve it. To date, only a few special cases of this problem have been studied. The objective of the work presented in this paper has been to more systematically explore this generalization in the case of uniform and star metrics. For uniform metrics, the problem is equivalent to a generalization of Paging in which each request specifies not only a page p, but also a subset S of cache slots, and is satisfied by having a copy of p in some slot in S. We call this problem Slot-Heterogenous Paging. In realistic settings only certain subsets of cache slots or servers would appear in requests. Therefore we parameterize the problem by specifying a family $${\mathcal{S}}\subseteq 2^{[k]}$$ of requestable slot sets, and we establish bounds on the competitive ratio as a function of the cache size k and family $${\mathcal{S}}$$: If all request sets are allowed ($${\mathcal{S}}=2^{[k]}\setminus \{\emptyset\}$$), the optimal deterministic and randomized competitive ratios are exponentially worse than for standard Paging ($${\mathcal{S}}=\{[k]\}$$). As a function of $$|{\mathcal{S}}|$$ and k, the optimal deterministic ratio is polynomial: at most $$O(k^2|{\mathcal{S}}|)$$ and at least $$\Omega(\sqrt{|{\mathcal{S}}|})$$. For any laminar family $${\mathcal{S}}$$ of height h, the optimal ratios are O(hk) (deterministic) and $$O(h^2\log k)$$ (randomized). The special case of laminar $${\mathcal{S}}$$ that we call All-or-One Paging extends standard Paging by allowing each request to specify a specific slot to put the requested page in. The optimal deterministic ratio for weighted All-or-One Paging is $$\Theta(k)$$. Offline All-or-One Paging is $$\mathbb{NP}$$-hard. Some results for the laminar case are shown via a reduction to the generalization of Paging in which each request specifies a set $$P$$ of pages, and is satisfied by fetching any page from $$P$$ into the cache. The optimal ratios for the latter problem (with a laminar family of height h) are at most hk (deterministic) and $$hH_k$$ (randomized).
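The Slot-Heterogenous Paging model can be made concrete with a toy simulator. The function below is a hypothetical sketch of the *semantics* only (the request format, the FIFO eviction choice, and all names are illustrative assumptions, not an algorithm from the paper, and certainly not a competitive one):

```python
from collections import deque

def serve_requests(k, requests):
    """Toy simulator for Slot-Heterogenous Paging semantics.

    `requests` is a list of (page, allowed_slots) pairs; a request is a
    hit if `page` already sits in one of its allowed slots, otherwise we
    load it into an allowed slot (evicting that slot's old page), using
    a naive least-recently-loaded choice among the allowed slots.
    Returns the number of page faults.
    """
    slots = [None] * k          # slot i holds a page, or None if empty
    order = deque(range(k))     # slot indices, least recently loaded first
    faults = 0
    for page, allowed in requests:
        if any(slots[i] == page for i in allowed):
            continue            # hit: page present in an allowed slot
        faults += 1
        # evict the least-recently-loaded slot among the allowed ones
        victim = min(allowed, key=lambda i: order.index(i))
        slots[victim] = page
        order.remove(victim)
        order.append(victim)
    return faults
```

Standard Paging is the special case where every request uses `allowed = range(k)`, and All-or-One Paging is the case where `allowed` is either all of `range(k)` or a single slot.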
Abstract We continue the program of proving circuit lower bounds via circuit satisfiability algorithms. So far, this program has yielded several concrete results, proving that functions in $$\mathsf{Quasi}\text{-}\mathsf{NP} = \mathsf{NTIME}[n^{(\log n)^{O(1)}}]$$ and other complexity classes do not have small circuits (in the worst case and/or on average) from various circuit classes $$\mathcal{C}$$, by showing that $$\mathcal{C}$$ admits non-trivial satisfiability and/or #SAT algorithms which beat exhaustive search by a minor amount. In this paper, we present a new strong lower bound consequence of having a non-trivial #SAT algorithm for a circuit class $$\mathcal{C}$$. Say that a symmetric Boolean function $$f(x_1,\ldots,x_n)$$ is sparse if it outputs 1 on O(1) values of $$\sum_{i} x_{i}$$. We show that for every sparse f, and for all “typical” $$\mathcal{C}$$, faster #SAT algorithms for $$\mathcal{C}$$ circuits imply lower bounds against the circuit class $$f \circ \mathcal{C}$$, which may be stronger than $$\mathcal{C}$$ itself. In particular: #SAT algorithms for $$n^k$$-size $$\mathcal{C}$$-circuits running in $$2^n/n^k$$ time (for all k) imply NEXP does not have $$(f \circ \mathcal{C})$$-circuits of polynomial size; #SAT algorithms for $$2^{n^{\varepsilon}}$$-size $$\mathcal{C}$$-circuits running in $$2^{n-n^{\varepsilon}}$$ time (for some ε > 0) imply Quasi-NP does not have $$(f \circ \mathcal{C})$$-circuits of polynomial size. Applying #SAT algorithms from the literature, one immediate corollary of our results is that Quasi-NP does not have EMAJ∘ACC0∘THR circuits of polynomial size, where EMAJ is the “exact majority” function, improving previous lower bounds against ACC0 [Williams JACM’14] and ACC0∘THR [Williams STOC’14], [Murray–Williams STOC’18]. This is the first nontrivial lower bound against such a circuit class.
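The exhaustive-search baseline that a “non-trivial #SAT algorithm” must beat is easy to state concretely. The sketch below (an illustrative baseline, not any of the algorithms from the literature cited above) counts satisfying assignments of a CNF by trying all $$2^n$$ assignments:

```python
from itertools import product

def count_sat(n, clauses):
    """Exhaustive-search #SAT baseline: count satisfying assignments.

    `clauses` is a CNF in DIMACS-style convention: each clause is a list
    of nonzero ints, where literal +i means variable i is true and -i
    means it is false (variables numbered 1..n). This runs in
    2^n * poly(n) time; a "non-trivial" #SAT algorithm must beat it,
    e.g. run in 2^n / n^k time for n^k-size circuits.
    """
    count = 0
    for bits in product([False, True], repeat=n):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            count += 1
    return count
```

For instance, the single clause $$(x_1 \vee x_2)$$ over two variables has three satisfying assignments, while the pair of unit clauses $$(x_1)$$, $$(\lnot x_1)$$ has none.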
Abstract A search for leptoquark pair production decaying into $$te^- \bar{t}e^+$$ or $$t\mu^- \bar{t}\mu^+$$ in final states with multiple leptons is presented. The search is based on a dataset of pp collisions at $$\sqrt{s}=13~\text{TeV}$$ recorded with the ATLAS detector during Run 2 of the Large Hadron Collider, corresponding to an integrated luminosity of 139 fb$$^{-1}$$. Four signal regions, requiring at least three light leptons (electrons or muons) and at least two jets, of which at least one is identified as coming from a b-hadron, are considered, based on the number of leptons of a given flavour. The main background processes are estimated using dedicated control regions in a simultaneous fit with the signal regions to data. No excess above the Standard Model background prediction is observed, and 95% confidence level limits on the production cross section times branching ratio are derived as a function of the leptoquark mass. Under the assumption of exclusive decays into $$te^{-}$$ ($$t\mu^{-}$$), the corresponding lower limit on the scalar mixed-generation leptoquark mass $$m_{\textrm{LQ}_{\textrm{mix}}^{\textrm{d}}}$$ is 1.58 (1.59) TeV, and on the vector leptoquark mass $$m_{\tilde{U}_1}$$ it is 1.67 (1.67) TeV in the minimal-coupling scenario and 1.95 (1.95) TeV in the Yang–Mills scenario.
Abstract A search is reported for charge-parity ($$CP$$) violation in $${\textrm{D}}^{0} \rightarrow {\textrm{K}}_{\text{S}}^{0} {\textrm{K}}_{\text{S}}^{0}$$ decays, using data collected in proton–proton collisions at $$\sqrt{s} = 13\,\text{TeV}$$ recorded by the CMS experiment in 2018. The analysis uses a dedicated data set corresponding to an integrated luminosity of 41.6$$\,\text{fb}^{-1}$$, which consists of about 10 billion events containing a pair of b hadrons, nearly all of which decay to charm hadrons. The flavor of the neutral D meson is determined by the pion charge in the reconstructed decays $${\textrm{D}}^{*+} \rightarrow {\textrm{D}}^{0} {\uppi}^{+}$$ and $${\textrm{D}}^{*-} \rightarrow {\overline{\textrm{D}}}^{0} {\uppi}^{-}$$. The $$CP$$ asymmetry in $${\textrm{D}}^{0} \rightarrow {\textrm{K}}_{\text{S}}^{0} {\textrm{K}}_{\text{S}}^{0}$$ is measured to be $$A_{CP}({\textrm{K}}_{\text{S}}^{0} {\textrm{K}}_{\text{S}}^{0}) = (6.2 \pm 3.0 \pm 0.2 \pm 0.8)\%$$, where the three uncertainties represent the statistical uncertainty, the systematic uncertainty, and the uncertainty in the measurement of the $$CP$$ asymmetry in the $${\textrm{D}}^{0} \rightarrow {\textrm{K}}_{\text{S}}^{0} {\uppi}^{+} {\uppi}^{-}$$ decay. This is the first $$CP$$ asymmetry measurement by CMS in the charm sector, as well as the first to use a fully hadronic final state.