This content will become publicly available on December 1, 2026

Title: Instance-Wise Hardness and Refutation versus Derandomization for Arthur-Merlin Protocols
Abstract: A fundamental question in computational complexity asks whether probabilistic polynomial-time algorithms can be simulated deterministically with a small overhead in time (the BPP vs. P problem). A corresponding question in the realm of interactive proofs asks whether Arthur-Merlin protocols can be simulated nondeterministically with a small overhead in time (the AM vs. NP problem). Both questions are intricately tied to lower bounds. Prominently, in both settings blackbox derandomization, i.e., derandomization through pseudorandom generators, has been shown equivalent to lower bounds for decision problems against circuits.

Recently, Chen and Tell (FOCS'21) established near-equivalences in the BPP setting between whitebox derandomization and lower bounds for multi-bit functions against algorithms on almost-all inputs. The key ingredient is a technique to translate hardness into targeted hitting sets in an instance-wise fashion, based on a layered arithmetization of the evaluation of a uniform circuit computing the hard function $f$ on the given instance. Follow-up works managed to obtain full equivalences in the BPP setting by exploiting a compression property of classical pseudorandom generator constructions. In particular, Chen, Tell, and Williams (FOCS'23) showed that derandomization of BPP is equivalent to constructive lower bounds against algorithms that go through a compression phase.

In this paper, we develop a corresponding technique for Arthur-Merlin protocols and establish similar near-equivalences in the AM setting. As an example of our results in the hardness-to-derandomization direction, consider a length-preserving function $f$ computable by a nondeterministic algorithm that runs in time $n^a$. We show that if every Arthur-Merlin protocol that runs in time $n^c$ for $c = O(\log^2 a)$ can only compute $f$ correctly on finitely many inputs, then AM is in NP. We also obtain equivalences between constructive lower bounds against Arthur-Merlin protocols that go through a compression phase and derandomization of AM via targeted generators.

Our main technical contribution is the construction of suitable targeted hitting-set generators based on probabilistically checkable proofs of proximity for nondeterministic computations. As a by-product of our constructions, we obtain the first result indicating that whitebox derandomization of AM may be equivalent to the existence of targeted hitting-set generators for AM, an issue raised by Goldreich (LNCS, 2011). By-products in the average-case setting include the first uniform hardness vs. randomness trade-offs for AM, as well as an unconditional mild derandomization result for AM.
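To make the quantitative example easier to scan, here is a schematic restatement of the stated implication in display form (our paraphrase of the abstract's statement, not the paper's formal theorem):

$$\Bigl(\, f \text{ length-preserving, computable in nondeterministic time } n^a, \text{ and every AM protocol running in time } n^c \text{ with } c = O(\log^2 a) \text{ computes } f \text{ correctly on only finitely many inputs} \,\Bigr) \;\Longrightarrow\; \mathsf{AM} \subseteq \mathsf{NP}.$$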
Award ID(s):
2312540
PAR ID:
10644915
Author(s) / Creator(s):
Publisher / Repository:
Springer
Date Published:
Journal Name:
computational complexity
Volume:
34
Issue:
2
ISSN:
1016-3328
Page Range / eLocation ID:
15:1-15:86
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Ta-Shma, Amnon (Ed.)
    A fundamental question in computational complexity asks whether probabilistic polynomial-time algorithms can be simulated deterministically with a small overhead in time (the BPP vs. P problem). A corresponding question in the realm of interactive proofs asks whether Arthur-Merlin protocols can be simulated nondeterministically with a small overhead in time (the AM vs. NP problem). Both questions are intricately tied to lower bounds. Prominently, in both settings blackbox derandomization, i.e., derandomization through pseudorandom generators, has been shown equivalent to lower bounds for decision problems against circuits. Recently, Chen and Tell (FOCS'21) established near-equivalences in the BPP setting between whitebox derandomization and lower bounds for multi-bit functions against algorithms on almost-all inputs. The key ingredient is a technique to translate hardness into targeted hitting sets in an instance-wise fashion based on a layered arithmetization of the evaluation of a uniform circuit computing the hard function f on the given instance. In this paper we develop a corresponding technique for Arthur-Merlin protocols and establish similar near-equivalences in the AM setting. As an example of our results in the hardness-to-derandomization direction, consider a length-preserving function f computable by a nondeterministic algorithm that runs in time n^a. We show that if every Arthur-Merlin protocol that runs in time n^c for c = O(log² a) can only compute f correctly on finitely many inputs, then AM is in NP. Our main technical contribution is the construction of suitable targeted hitting-set generators based on probabilistically checkable proofs for nondeterministic computations. As a byproduct of our constructions, we obtain the first result indicating that whitebox derandomization of AM may be equivalent to the existence of targeted hitting-set generators for AM, an issue raised by Goldreich (LNCS, 2011). Byproducts in the average-case setting include the first uniform hardness vs. randomness trade-offs for AM, as well as an unconditional mild derandomization result for AM.
  2. Bouyer, Patricia; Srinivasan, Srikanth (Eds.)
    Many derandomization results for probabilistic decision processes have been ported to the setting of Arthur-Merlin protocols. Whereas the ultimate goal in the first setting is an efficient simulation on deterministic machines (the BPP vs. P problem), in the second setting it is an efficient simulation on nondeterministic machines (the AM vs. NP problem). Two notable exceptions that have not yet been ported from the first to the second setting are the equivalence between whitebox derandomization and leakage resilience (Liu and Pass, 2023), and the equivalence between whitebox derandomization and targeted pseudorandom generators (Goldreich, 2011). We develop both equivalences for mild derandomizations of Arthur-Merlin protocols, i.e., simulations on Σ₂-machines. Our techniques also apply to natural simulation models that are intermediate between nondeterministic machines and Σ₂-machines.
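    As a quick orientation (our gloss in standard notation, not the paper's): the full and mild derandomization targets differ only in the simulating machine model, with the intermediate models mentioned above sitting between the two,

    $$\underbrace{\mathsf{AM} \subseteq \mathsf{NTIME}[t]}_{\text{full derandomization (AM vs. NP)}} \qquad \underbrace{\mathsf{AM} \subseteq \Sigma_2\text{-}\mathsf{TIME}[t]}_{\text{mild derandomization}}$$

    for a small time bound $t$; a Σ₂-machine may existentially guess a string and then universally verify it with a deterministic predicate.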
  3. Abstract: In a Merlin–Arthur proof system, the proof verifier (Arthur) accepts valid proofs (from Merlin) with probability 1, and rejects invalid proofs with probability arbitrarily close to 1. The running time of such a system is defined to be the length of Merlin's proof plus the running time of Arthur (a toy example of such a verifier is sketched after this abstract). We provide new Merlin–Arthur proof systems for some key problems in fine-grained complexity. In several cases our proof systems have optimal running time. Our main results include:

     - Certifying that a list of $n$ integers has no 3-SUM solution can be done in Merlin–Arthur time $\tilde{O}(n)$. Previously, Carmosino et al. [ITCS 2016] showed that the problem has a nondeterministic algorithm running in $\tilde{O}(n^{1.5})$ time (that is, there is a proof system with proofs of length $\tilde{O}(n^{1.5})$ and a deterministic verifier running in $\tilde{O}(n^{1.5})$ time).
     - Counting the number of $k$-cliques with total edge weight equal to zero in an $n$-node graph can be done in Merlin–Arthur time $\tilde{O}(n^{\lceil k/2 \rceil})$ (where $k \ge 3$). For odd $k$, this bound can be further improved for sparse graphs: for example, counting the number of zero-weight triangles in an $m$-edge graph can be done in Merlin–Arthur time $\tilde{O}(m)$. Previous Merlin–Arthur protocols by Williams [CCC'16] and Björklund and Kaski [PODC'16] could only count $k$-cliques in unweighted graphs, and had worse running times for small $k$.
     - Computing the All-Pairs Shortest Distances matrix for an $n$-node graph can be done in Merlin–Arthur time $\tilde{O}(n^2)$. Note this is optimal, as the matrix can have $\Omega(n^2)$ nonzero entries in general. Previously, Carmosino et al. [ITCS 2016] showed that this problem has an $\tilde{O}(n^{2.94})$ nondeterministic time algorithm.
     - Certifying that an $n$-variable $k$-CNF is unsatisfiable can be done in Merlin–Arthur time $2^{n/2 - n/O(k)}$. We also observe an algebrization barrier for the previous $2^{n/2} \cdot \mathrm{poly}(n)$-time Merlin–Arthur protocol of R. Williams [CCC'16] for #SAT: in particular, his protocol algebrizes, and we observe there is no algebrizing protocol for $k$-UNSAT running in $2^{n/2}/n^{\omega(1)}$ time. Therefore we have to exploit non-algebrizing properties to obtain our new protocol.
     - Certifying a Quantified Boolean Formula is true can be done in Merlin–Arthur time $2^{4n/5} \cdot \mathrm{poly}(n)$. Previously, the only nontrivial result known along these lines was an Arthur–Merlin–Arthur protocol (where Merlin's proof depends on some of Arthur's coins) running in $2^{2n/3} \cdot \mathrm{poly}(n)$ time.

     Due to the centrality of these problems in fine-grained complexity, our results have consequences for many other problems of interest. For example, our work implies that certifying there is no Subset Sum solution to $n$ integers can be done in Merlin–Arthur time $2^{n/3} \cdot \mathrm{poly}(n)$, improving on the previous best protocol by Nederlof [IPL 2017], which took $2^{0.49991n} \cdot \mathrm{poly}(n)$ time.
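     As a toy illustration of the definition above (our own sketch in the classic Freivalds style, not the paper's protocol): Merlin's proof for a matrix product is the product matrix itself, and Arthur verifies it with random vector probes. This mirrors the one-sided error requirement, since valid proofs are accepted with probability 1 and invalid ones are rejected with high probability.

        # A minimal Freivalds-style Merlin-Arthur check (illustrative toy, not
        # the paper's APSP protocol): Merlin claims C = A @ B; Arthur probes
        # with random 0/1 vectors. Honest claims pass with probability 1; a
        # false claim survives each probe with probability <= 1/2.
        import numpy as np

        def arthur_verifies(A, B, C, trials=20):
            """One-sided-error check of Merlin's claim that C equals A @ B."""
            n = A.shape[0]
            rng = np.random.default_rng()
            for _ in range(trials):
                x = rng.integers(0, 2, size=n)       # random 0/1 probe vector
                if not np.array_equal(A @ (B @ x), C @ x):
                    return False                     # caught a lie: reject
            return True                              # all probes passed: accept

        rng = np.random.default_rng(0)
        A = rng.integers(0, 10, size=(50, 50))
        B = rng.integers(0, 10, size=(50, 50))
        honest = A @ B                               # Merlin's proof is C itself
        cheating = honest.copy()
        cheating[3, 7] += 1                          # a single wrong entry
        assert arthur_verifies(A, B, honest)         # valid: always accepted
        assert not arthur_verifies(A, B, cheating)   # invalid: rejected w.h.p.

     Arthur's work here is $O(n^2)$ per probe (three matrix-vector products), so the total time is dominated by reading Merlin's $n^2$-entry proof, the same $\tilde{O}(n^2)$ flavor as the APSP result above.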
  4. Abstract: We continue the program of proving circuit lower bounds via circuit satisfiability algorithms. So far, this program has yielded several concrete results, proving that functions in $\mathsf{Quasi}\text{-}\mathsf{NP} = \mathsf{NTIME}[n^{(\log n)^{O(1)}}]$ and other complexity classes do not have small circuits (in the worst case and/or on average) from various circuit classes $\mathcal{C}$, by showing that $\mathcal{C}$ admits non-trivial satisfiability and/or #SAT algorithms which beat exhaustive search by a minor amount. In this paper, we present a new strong lower bound consequence of having a non-trivial #SAT algorithm for a circuit class $\mathcal{C}$. Say that a symmetric Boolean function $f(x_1, \ldots, x_n)$ is sparse if it outputs 1 on $O(1)$ values of $\sum_i x_i$ (see the sketch after this abstract). We show that for every sparse $f$, and for all "typical" $\mathcal{C}$, faster #SAT algorithms for $\mathcal{C}$ circuits imply lower bounds against the circuit class $f \circ \mathcal{C}$, which may be stronger than $\mathcal{C}$ itself. In particular:

     - #SAT algorithms for $n^k$-size $\mathcal{C}$-circuits running in $2^n/n^k$ time (for all $k$) imply NEXP does not have $(f \circ \mathcal{C})$-circuits of polynomial size.
     - #SAT algorithms for $2^{n^{\varepsilon}}$-size $\mathcal{C}$-circuits running in $2^{n - n^{\varepsilon}}$ time (for some $\varepsilon > 0$) imply Quasi-NP does not have $(f \circ \mathcal{C})$-circuits of polynomial size.

     Applying #SAT algorithms from the literature, one immediate corollary of our results is that Quasi-NP does not have $\mathsf{EMAJ} \circ \mathsf{ACC}^0 \circ \mathsf{THR}$ circuits of polynomial size, where EMAJ is the "exact majority" function, improving previous lower bounds against $\mathsf{ACC}^0$ [Williams JACM'14] and $\mathsf{ACC}^0 \circ \mathsf{THR}$ [Williams STOC'14], [Murray–Williams STOC'18]. This is the first nontrivial lower bound against such a circuit class.
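     To make the sparsity definition concrete, here is a small sketch (our own toy code; EMAJ is the "exact majority" function named above):

        # Sparse symmetric functions, per the definition above (our toy code).
        # A symmetric function depends only on the Hamming weight sum(x); it
        # is sparse if it outputs 1 on O(1) of the n+1 possible weight values.
        from itertools import product

        def emaj(x):
            """Exact majority: 1 iff exactly half of the input bits are 1."""
            return 2 * sum(x) == len(x)

        n = 6
        accepting = {sum(x) for x in product([0, 1], repeat=n) if emaj(x)}
        print(accepting)  # {3}: one accepting weight, so EMAJ is sparse

     An $(\mathsf{EMAJ} \circ \mathcal{C})$-circuit then applies EMAJ to the outputs of several $\mathcal{C}$-circuits, which is the composed class the corollary addresses.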
  5. Abstract: Let $f$ be an analytic polynomial of degree at most $K-1$. A classical inequality of Bernstein compares the supremum norm of $f$ over the unit circle to its supremum norm over the sampling set of the $K$-th roots of unity (illustrated numerically after this abstract). Many extensions of this inequality exist, often understood under the umbrella of Marcinkiewicz–Zygmund-type inequalities for $L^p$ norms, $1 \le p \le \infty$. We study dimension-free extensions of these discretization inequalities in the high-dimension regime, where existing results construct sampling sets with cardinality growing with the total degree of the polynomial. In this work we show that dimension-free discretizations are possible with sampling sets whose cardinality is independent of $\deg(f)$ and is instead governed by the maximum individual degree of $f$, i.e., the largest degree of $f$ when viewed as a univariate polynomial in any coordinate. For example, we find that for $n$-variate analytic polynomials $f$ of degree at most $d$ and individual degree at most $K-1$,

     $$\|f\|_{L^{\infty}(\mathbf{D}^n)} \le C(X)^d \, \|f\|_{L^{\infty}(X^n)}$$

     for any fixed $X$ in the unit disc $\mathbf{D}$ with $|X| = K$. The dependence on $d$ in the constant is tight for such small sampling sets, which arise naturally, for example, when studying polynomials of bounded degree coming from functions on products of cyclic groups. As an application we obtain a proof of the cyclic group Bohnenblust–Hille inequality with an explicit constant $\mathcal{O}(\log K)^{2d}$.
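     As a quick numerical illustration of the univariate case mentioned at the start (our own sanity check, not from the paper), one can compare the two supremum norms directly for a random polynomial of degree $K-1$; classically the ratio is bounded by a factor on the order of $\log K$:

        # Compare sup |f| over the unit circle with sup |f| over the K-th
        # roots of unity, for a random polynomial of degree K-1 (ours only).
        import numpy as np

        rng = np.random.default_rng(1)
        K = 16
        coeffs = rng.standard_normal(K) + 1j * rng.standard_normal(K)

        def sup_abs(points):
            # evaluate f(z) = sum_j coeffs[j] z^j at each point; take the max
            powers = points[:, None] ** np.arange(K)[None, :]
            return np.abs(powers @ coeffs).max()

        roots = np.exp(2j * np.pi * np.arange(K) / K)           # K-th roots
        circle = np.exp(2j * np.pi * np.linspace(0, 1, 4096))   # dense grid
        print(sup_abs(circle) / sup_abs(roots))   # modest ratio, ~O(log K)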