Title: Constructive Separations and Their Consequences
For a complexity class $$C$$ and language $$L$$, a constructive separation of $$L \notin C$$ gives an efficient algorithm (also called a refuter) to find counterexamples (bad inputs) for every $$C$$-algorithm attempting to decide $$L$$. We study the questions: Which lower bounds can be made constructive? What are the consequences of constructive separations? We build a case that constructiveness serves as a dividing line between many weak lower bounds we know how to prove, and strong lower bounds against $$P$$, $$ZPP$$, and $$BPP$$. Put another way, constructiveness is the opposite of a complexity barrier: it is a property we want lower bounds to have. Our results fall into three broad categories.

1. Our first set of results shows that, for many well-known lower bounds against streaming algorithms, one-tape Turing machines, and query complexity, as well as lower bounds for the Minimum Circuit Size Problem, making these lower bounds constructive would imply breakthrough separations ranging from $$EXP \neq BPP$$ to even $$P \neq NP$$.

2. Our second set of results shows that for most major open problems in lower bounds against $$P$$, $$ZPP$$, and $$BPP$$, including $$P \neq NP$$, $$P \neq PSPACE$$, $$P \neq PP$$, $$ZPP \neq EXP$$, and $$BPP \neq NEXP$$, any proof of the separation would further imply a constructive separation. Our results generalize earlier results for $$P \neq NP$$ [Gutfreund, Shaltiel, and Ta-Shma, CCC 2005] and $$BPP \neq NEXP$$ [Dolev, Fandina and Gutfreund, CIAC 2013].

3. Our third set of results shows that certain complexity separations cannot be made constructive. We observe that for all super-polynomially growing functions $$t$$, there are no constructive separations for detecting high $$t$$-time Kolmogorov complexity (a task which is known to be not in $$P$$) from any complexity class, unconditionally.
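One common way to formalize the refuter notion above is the following schematic sketch (the exact uniformity, running-time, and "infinitely often" conventions used in the paper may differ):

$$\forall A \in C \;\; \exists \text{ an efficient refuter } R_A \text{ such that, for infinitely many } n:\quad x = R_A(1^n) \in \{0,1\}^n \ \text{ and } \ A(x) \neq L(x).$$

That is, the refuter $$R_A$$ does not merely certify that a bad input for $$A$$ exists; it explicitly produces one.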
Award ID(s):
2127597
PAR ID:
10579530
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
TheoretiCS
Date Published:
Journal Name:
TheoretiCS
Volume:
Volume 3
ISSN:
2751-4838
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Ta-Shma, Amnon (Ed.)
    A fundamental question in computational complexity asks whether probabilistic polynomial-time algorithms can be simulated deterministically with a small overhead in time (the BPP vs. P problem). A corresponding question in the realm of interactive proofs asks whether Arthur-Merlin protocols can be simulated nondeterministically with a small overhead in time (the AM vs. NP problem). Both questions are intricately tied to lower bounds. Prominently, in both settings blackbox derandomization, i.e., derandomization through pseudo-random generators, has been shown equivalent to lower bounds for decision problems against circuits. Recently, Chen and Tell (FOCS'21) established near-equivalences in the BPP setting between whitebox derandomization and lower bounds for multi-bit functions against algorithms on almost-all inputs. The key ingredient is a technique to translate hardness into targeted hitting sets in an instance-wise fashion based on a layered arithmetization of the evaluation of a uniform circuit computing the hard function f on the given instance. In this paper we develop a corresponding technique for Arthur-Merlin protocols and establish similar near-equivalences in the AM setting. As an example of our results in the hardness to derandomization direction, consider a length-preserving function f computable by a nondeterministic algorithm that runs in time n^a. We show that if every Arthur-Merlin protocol that runs in time n^c for c = O(log² a) can only compute f correctly on finitely many inputs, then AM is in NP. Our main technical contribution is the construction of suitable targeted hitting-set generators based on probabilistically checkable proofs for nondeterministic computations. As a byproduct of our constructions, we obtain the first result indicating that whitebox derandomization of AM may be equivalent to the existence of targeted hitting-set generators for AM, an issue raised by Goldreich (LNCS, 2011). Byproducts in the average-case setting include the first uniform hardness vs. randomness tradeoffs for AM, as well as an unconditional mild derandomization result for AM. 
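    Schematically, a targeted hitting-set generator H would be used to derandomize an Arthur-Merlin protocol as follows (an illustrative sketch only, assuming a two-round public-coin protocol with verifier V, perfect completeness, and soundness error below 1/2; these conventions and the majority threshold are assumptions, not the paper's exact construction). On input x, compute the targeted hitting set H(x) = {r_1, ..., r_k} and nondeterministically accept iff

    $$\exists\, y_1, \dots, y_k \;:\; \big|\{\, i \le k : V(x, r_i, y_i) = 1 \,\}\big| \;>\; k/2.$$

    Completeness is immediate, and the "targeted" hitting property is what must guarantee that, for x outside the language, no choice of Merlin responses wins a majority of the r_i; this places the language in NP.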
  2. Existing proofs that deduce BPP = P from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large polynomial slowdown. We convert randomized algorithms into deterministic ones with little slowdown. Specifically, assuming exponential lower bounds against randomized NP ∩ coNP circuits, formally known as randomized SVN circuits, we convert any randomized algorithm over inputs of length n running in time t ≥ n into a deterministic one running in time t^{2+α} for an arbitrarily small constant α > 0. Such a slowdown is nearly optimal for t close to n, since under standard complexity-theoretic assumptions, there are problems with an inherent quadratic derandomization slowdown. We also convert any randomized algorithm that errs rarely into a deterministic algorithm having a similar running time (with pre-processing). The latter derandomization result holds under weaker assumptions, of exponential lower bounds against deterministic SVN circuits. Our results follow from a new, nearly optimal, explicit pseudorandom generator fooling circuits of size s with seed length (1+α) log s, under the assumption that there exists a function f ∈ E that requires randomized SVN circuits of size at least 2^{(1-α′)n}, where α′ = O(α). The construction uses, among other ideas, a new connection between pseudoentropy generators and locally list recoverable codes.
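    For intuition about the t^{2+α} bound, here is the standard seed-enumeration accounting (a back-of-the-envelope sketch, under the illustrative assumption that the circuit being fooled has size s ≈ t, and ignoring the time to compute the generator itself). The derandomized algorithm reruns the randomized algorithm once per seed, so its running time is roughly

    $$\underbrace{2^{(1+\alpha)\log s}}_{\text{number of seeds}} \cdot \underbrace{t}_{\text{time per run}} \;=\; s^{1+\alpha} \cdot t \;\approx\; t^{1+\alpha} \cdot t \;=\; t^{2+\alpha}.$$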
  3. The Exponential-Time Hypothesis (\(\mathtt{ETH}\)) is a strengthening of the \(\mathcal{P} \ne \mathcal{NP}\) conjecture, stating that \(3\text{-}\mathtt{SAT}\) on n variables cannot be solved in (uniform) time \(2^{\epsilon \cdot n}\), for some \(\epsilon > 0\). In recent years, analogous hypotheses that are “exponentially-strong” forms of other classical complexity conjectures (such as \(\mathcal{NP} \nsubseteq \mathcal{BPP}\) or \(co\mathcal{NP} \nsubseteq \mathcal{NP}\)) have also been introduced, and have become widely influential. In this work, we focus on the interaction of exponential-time hypotheses with the fundamental and closely related questions of derandomization and circuit lower bounds. We show that even relatively mild variants of exponential-time hypotheses have far-reaching implications for derandomization, circuit lower bounds, and the connections between the two. Specifically, we prove that: (1) The Randomized Exponential-Time Hypothesis (\(\mathsf{rETH}\)) implies that \(\mathcal{BPP}\) can be simulated on “average-case” inputs in deterministic (nearly-)polynomial time (i.e., in time \(2^{\tilde{O}(\log(n))} = n^{(\log\log n)^{O(1)}}\)). The derandomization relies on a conditional construction of a pseudorandom generator with near-exponential stretch (i.e., with seed length \(\tilde{O}(\log(n))\)); this significantly improves the state-of-the-art in uniform “hardness-to-randomness” results, which previously only yielded pseudorandom generators with sub-exponential stretch from such hypotheses. (2) The Non-Deterministic Exponential-Time Hypothesis (\(\mathsf{NETH}\)) implies that derandomization of \(\mathcal{BPP}\) is completely equivalent to circuit lower bounds against \(\mathcal{E}\), and in particular that pseudorandom generators are necessary for derandomization. In fact, we show that the foregoing equivalence follows from a very weak version of \(\mathsf{NETH}\), and we also show that this very weak version is necessary to prove a slightly stronger conclusion that we deduce from it. Lastly, we show that disproving certain exponential-time hypotheses requires proving breakthrough circuit lower bounds. In particular, if \(\mathtt{CircuitSAT}\) for circuits over n bits of size poly(n) can be solved by probabilistic algorithms in time \(2^{n/\mathrm{polylog}(n)}\), then \(\mathcal{BPE}\) does not have circuits of quasilinear size.
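    To unpack the “(nearly-)polynomial” running time in item (1), recall the standard convention \(\tilde{O}(\log n) = \log n \cdot (\log\log n)^{O(1)}\); then
    \[ 2^{\tilde{O}(\log n)} \;=\; 2^{\log n \cdot (\log\log n)^{O(1)}} \;=\; \big(2^{\log n}\big)^{(\log\log n)^{O(1)}} \;=\; n^{(\log\log n)^{O(1)}}, \]
    a bound that is only barely super-polynomial.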
  4.
    The complexity class ZPP^{NP[1]} (corresponding to zero-error randomized algorithms with access to one NP oracle query) is known to have a number of curious properties. We further explore this class in the settings of time complexity, query complexity, and communication complexity. • For starters, we provide a new characterization: ZPP^{NP[1]} equals the restriction of BPP^{NP[1]} where the algorithm is only allowed to err when it forgoes the opportunity to make an NP oracle query. • Using the above characterization, we prove a query-to-communication lifting theorem, which translates any ZPP^{NP[1]} decision tree lower bound for a function f into a ZPP^{NP[1]} communication lower bound for a two-party version of f. • As an application, we use the above lifting theorem to prove that the ZPP^{NP[1]} communication lower bound technique introduced by Göös, Pitassi, and Watson (ICALP 2016) is not tight. We also provide a “primal” characterization of this lower bound technique as a complexity class.
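    Such a lifting theorem follows the usual query-to-communication template (stated here generically as an illustration; the specific gadget g and the exact parameter loss are assumptions, not the paper's precise statement): for a suitable gadget g, such as an index gadget on poly(n) bits,

    $$\mathrm{ZPP}^{\mathrm{NP}[1]}\text{-}\mathrm{cc}\big(f \circ g^{n}\big) \;\ge\; \Omega\big(\mathrm{ZPP}^{\mathrm{NP}[1]}\text{-}\mathrm{dt}(f)\big),$$

    so any decision-tree (query) lower bound for f transfers to a communication lower bound for the composed two-party function f ∘ g^n.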
  5. Characterizing the performance of no-regret dynamics in multi-player games is a foundational problem at the interface of online learning and game theory. Recent results have revealed that when all players adopt specific learning algorithms, it is possible to improve exponentially over what is predicted by the overly pessimistic no-regret framework in the traditional adversarial regime, thereby leading to faster convergence to the set of coarse correlated equilibria (CCE), a standard game-theoretic equilibrium concept. Yet, despite considerable recent progress, the fundamental complexity barriers for learning in normal- and extensive-form games are poorly understood. In this paper, we make a step towards closing this gap by first showing that, barring major complexity breakthroughs, any polynomial-time learning algorithm in extensive-form games needs at least $$2^{(\log |T|)^{1/2-o(1)}}$$ iterations for the average regret to reach below even an absolute constant, where |T| is the number of nodes in the game. This establishes a superpolynomial separation between no-regret learning in normal- and extensive-form games, as in the former class a logarithmic number of iterations suffices to achieve constant average regret. Furthermore, our results imply that algorithms such as multiplicative weights update, as well as its optimistic counterpart, require at least $$2^{(\log\log m)^{1/2-o(1)}}$$ iterations to attain an O(1)-CCE in m-action normal-form games under any parameterization. These are the first non-trivial, and dimension-dependent, lower bounds in that setting for the most well-studied algorithms in the literature. From a technical standpoint, we follow a beautiful connection recently made by Foster, Golowich, and Kakade (ICML '23) between sparse CCE and Nash equilibria in the context of Markov games. Consequently, our lower bounds rule out polynomial-time algorithms well beyond the traditional online learning framework, capturing techniques commonly used for accelerating centralized equilibrium computation.
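    To see why a logarithmic number of iterations suffices for constant average regret in normal-form games, here is the standard calculation for plain multiplicative weights update (given as context rather than as this paper's argument): with m actions and an appropriately tuned learning rate, MWU guarantees external regret $$O(\sqrt{T \log m})$$ after T rounds, so the average regret is

    $$\frac{\mathrm{Regret}_T}{T} \;=\; O\!\left(\sqrt{\frac{\log m}{T}}\right),$$

    which falls below any fixed constant once $$T = \Omega(\log m)$$; and when every player has small average external regret, the empirical distribution of joint play is an approximate coarse correlated equilibrium.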