Title: Distinguishing, Predicting, and Certifying: On the Long Reach of Partial Notions of Pseudorandomness
This paper revisits two classical technical tools in theoretical computer science: Yao's transformation of distinguishers into next-bit predictors (FOCS 1982), and the "reconstruction paradigm" in pseudorandomness (e.g., as in Nisan and Wigderson, JCSS 1994). Recent works of Pyne, Raz, and Zhan (FOCS 2023) and Doron, Pyne, and Tell (STOC 2024) showed that both of these tools can be derandomized in the specific context of read-once branching programs (ROBPs), but left open the question of derandomizing them in more general settings. Our main contributions give appealing evidence that derandomization of the two tools is possible in general settings, show surprisingly strong consequences of such derandomization, and reveal several new settings where such derandomization is unconditionally possible for algorithms stronger than ROBPs (with useful consequences). Specifically:
• We show that derandomizing these tools is equivalent to general derandomization. Specifically, we show that derandomizing distinguish-to-predict transformations is equivalent to prBPP = prP, and that the existence of derandomized reconstruction procedures (in a more general sense that we introduce) is equivalent to prBPP = prZPP. These statements hold even when scaled down to weak circuit classes and to algorithms that run in super-polynomial time.
• Our main technical contributions are unconditional constructions of derandomized versions of Yao's transformation (or reductions of this task to other problems) for classes and algorithms beyond ROBPs. Consequently, we deduce new results: a significant relaxation of the hypotheses required to derandomize the isolation lemma for logspace algorithms and deduce that NL = UL; and proofs that derandomization necessitates targeted PRGs in catalytic logspace (unconditionally) and in logspace (conditionally).
In addition, we introduce a natural subclass of prZPP that has been implicitly studied in recent works (Korten, FOCS 2021 and CCC 2022): the class of problems reducible to a problem called "Lossy Code". We provide a structural characterization for this class in terms of derandomized reconstruction procedures, and show that this characterization is robust to several natural variations. Lastly, we present alternative proofs of classical results in the theory of pseudorandomness (such as two-sided derandomization reducing to one-sided), relying on the deterministic transformation of distinguishers into predictors as the main technical tool.
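For readers unfamiliar with the first of these tools, the following is a minimal Python sketch of the classical, randomized form of Yao's distinguisher-to-predictor transformation; the paper's contribution concerns derandomizing exactly this step. The function names and toy interface here are illustrative assumptions, not the paper's construction.

```python
import random

def yao_next_bit_predictor(D, prefix, n):
    """Classical (randomized) Yao predictor.

    Given oracle access to a distinguisher D: {0,1}^n -> {0,1} and the
    first i bits of a sample, guess bit i+1.  The hybrid argument shows
    that if D distinguishes a distribution X from uniform with advantage
    eps, then for some index i this predictor guesses bit i+1 of X with
    advantage at least eps/n (flipping the output if the signed
    advantage at layer i is negative).
    """
    i = len(prefix)
    b = random.randint(0, 1)                                   # guess for bit i+1
    suffix = [random.randint(0, 1) for _ in range(n - i - 1)]  # uniform padding
    # If D "accepts" the hybrid string, keep the guess; otherwise flip it.
    return b if D(prefix + [b] + suffix) == 1 else 1 - b

# Toy usage: D accepts strings whose first two bits agree, which
# distinguishes the "repeat the first bit" distribution from uniform.
D = lambda z: 1 if z[0] == z[1] else 0
guess = yao_next_bit_predictor(D, prefix=[1], n=4)  # correctly predicts bit 2
```

The randomness in the guessed bit and in the uniform padding is precisely what a derandomized distinguish-to-predict transformation must eliminate.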
Award ID(s):
2310818
PAR ID:
10575476
Publisher / Repository:
65th IEEE Annual Symposium on Foundations of Computer Science (FOCS 2024)
Format(s):
Medium: X
Location:
Chicago, IL
Sponsoring Org:
National Science Foundation
More Like this
  1. Guruswami, Venkatesan (Ed.)
    In a recent work, Chen, Hoza, Lyu, Tal and Wu (FOCS 2023) showed an improved error reduction framework for the derandomization of regular read-once branching programs (ROBPs). Their result is based on a clever modification to the inverse Laplacian perspective of space-bounded derandomization, which was originally introduced by Ahmadinejad, Kelner, Murtagh, Peebles, Sidford and Vadhan (FOCS 2020). In this work, we give an alternative error reduction framework for regular ROBPs. Our new framework is based on a binary recursive formula from the work of Chattopadhyay and Liao (CCC 2020), which they used to construct weighted pseudorandom generators (WPRGs) for general ROBPs. Based on our new error reduction framework, we give alternative proofs of the following results for regular ROBPs of length n and width w, both of which were proved in the work of Chen et al. using their error reduction:
    - There is a WPRG with error ε that has seed length Õ(log(n)(√{log(1/ε)}+log(w))+log(1/ε)).
    - There is a (non-black-box) deterministic algorithm which estimates the expectation of any such program within error ±ε with space complexity Õ(log(nw)⋅log log(1/ε)). This was first proved in the work of Ahmadinejad et al., but the proof by Chen et al. is simpler.
    Because of the binary recursive nature of our new framework, both of our proofs are based on a straightforward induction that is arguably simpler than the Laplacian-based proof in the work of Chen et al. In fact, because of its simplicity, our proof of the second result directly gives a slightly stronger claim: our algorithm computes an ε-singular-value approximation (a notion of approximation introduced in a recent work by Ahmadinejad, Peebles, Pyne, Sidford and Vadhan (FOCS 2023)) of the random walk matrix of the given ROBP in space Õ(log(nw)⋅log log(1/ε)). It is not clear how to get this stronger result from the previous proofs.
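    As background for the random-walk-matrix viewpoint invoked above, the following display records the standard fact (assumed here for illustration, not taken from the paper) that estimating the expectation of a width-w, length-n ROBP amounts to approximating one entry of a product of averaged transition matrices:

```latex
\[
  \Pr_{x \sim \{0,1\}^n}\bigl[B(x)\ \text{accepts}\bigr]
  \;=\;
  \left[\; \prod_{t=1}^{n} \tfrac{1}{2}\Bigl(M_t^{(0)} + M_t^{(1)}\Bigr) \right]_{\mathrm{start},\ \mathrm{accept}},
\]
% where M_t^{(b)} \in \{0,1\}^{w \times w} is the (deterministic, hence
% row-stochastic) transition matrix of layer t on input bit b.
```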
  2. Ta-Shma, Amnon (Ed.)
    A fundamental question in computational complexity asks whether probabilistic polynomial-time algorithms can be simulated deterministically with a small overhead in time (the BPP vs. P problem). A corresponding question in the realm of interactive proofs asks whether Arthur-Merlin protocols can be simulated nondeterministically with a small overhead in time (the AM vs. NP problem). Both questions are intricately tied to lower bounds. Prominently, in both settings blackbox derandomization, i.e., derandomization through pseudorandom generators, has been shown equivalent to lower bounds for decision problems against circuits. Recently, Chen and Tell (FOCS'21) established near-equivalences in the BPP setting between whitebox derandomization and lower bounds for multi-bit functions against algorithms on almost-all inputs. The key ingredient is a technique to translate hardness into targeted hitting sets in an instance-wise fashion based on a layered arithmetization of the evaluation of a uniform circuit computing the hard function f on the given instance. In this paper we develop a corresponding technique for Arthur-Merlin protocols and establish similar near-equivalences in the AM setting. As an example of our results in the hardness to derandomization direction, consider a length-preserving function f computable by a nondeterministic algorithm that runs in time n^a. We show that if every Arthur-Merlin protocol that runs in time n^c for c = O(log² a) can only compute f correctly on finitely many inputs, then AM is in NP. Our main technical contribution is the construction of suitable targeted hitting-set generators based on probabilistically checkable proofs for nondeterministic computations. As a byproduct of our constructions, we obtain the first result indicating that whitebox derandomization of AM may be equivalent to the existence of targeted hitting-set generators for AM, an issue raised by Goldreich (LNCS, 2011). Byproducts in the average-case setting include the first uniform hardness vs. randomness tradeoffs for AM, as well as an unconditional mild derandomization result for AM.
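    To make the role of targeted hitting-set generators concrete, here is a hedged sketch of the standard "AM in NP via hitting sets" simulation in the perfect-completeness case; the notation V, H, and k is illustrative and not taken from the paper.

```latex
% Suppose the protocol for L has verifier V(x, r, y) with perfect
% completeness, and the targeted generator outputs H(x) = \{r_1,\dots,r_k\}
% hitting every dense set of the form \{\, r : \neg\exists y\; V(x,r,y)=1 \,\}.
% The NP simulation guesses Merlin answers y_1, \dots, y_k and accepts iff
\[
  \bigwedge_{i=1}^{k} V(x, r_i, y_i) = 1 .
\]
% Completeness: if x \in L, every r_i admits a valid y_i, so some guess accepts.
% Soundness: if x \notin L, at least a 2/3 fraction of the r's admit no valid y,
% so H(x) contains such an r_i and every guess is rejected.
```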
  3. Abstract: A fundamental question in computational complexity asks whether probabilistic polynomial-time algorithms can be simulated deterministically with a small overhead in time (the BPP vs. P problem). A corresponding question in the realm of interactive proofs asks whether Arthur-Merlin protocols can be simulated nondeterministically with a small overhead in time (the AM vs. NP problem). Both questions are intricately tied to lower bounds. Prominently, in both settings blackbox derandomization, i.e., derandomization through pseudorandom generators, has been shown equivalent to lower bounds for decision problems against circuits. Recently, Chen and Tell (FOCS'21) established near-equivalences in the BPP setting between whitebox derandomization and lower bounds for multi-bit functions against algorithms on almost-all inputs. The key ingredient is a technique to translate hardness into targeted hitting sets in an instance-wise fashion based on a layered arithmetization of the evaluation of a uniform circuit computing the hard function f on the given instance. Follow-up works managed to obtain full equivalences in the BPP setting by exploiting a compression property of classical pseudorandom generator constructions. In particular, Chen, Tell, and Williams (FOCS'23) showed that derandomization of BPP is equivalent to constructive lower bounds against algorithms that go through a compression phase. In this paper, we develop a corresponding technique for Arthur-Merlin protocols and establish similar near-equivalences in the AM setting. As an example of our results in the hardness-to-derandomization direction, consider a length-preserving function f computable by a nondeterministic algorithm that runs in time n^a. We show that if every Arthur-Merlin protocol that runs in time n^c for c = O(log² a) can only compute f correctly on finitely many inputs, then AM is in NP. We also obtain equivalences between constructive lower bounds against Arthur-Merlin protocols that go through a compression phase and derandomization of AM via targeted generators. Our main technical contribution is the construction of suitable targeted hitting-set generators based on probabilistically checkable proofs of proximity for nondeterministic computations. As a by-product of our constructions, we obtain the first result indicating that whitebox derandomization of AM may be equivalent to the existence of targeted hitting-set generators for AM, an issue raised by Goldreich (LNCS, 2011). By-products in the average-case setting include the first uniform hardness vs. randomness trade-offs for AM, as well as an unconditional mild derandomization result for AM.
  4. We provide compelling evidence for the potential of hardness-vs.-randomness approaches to make progress on the long-standing problem of derandomizing space-bounded computation. Our first contribution is a derandomization of bounded-space machines from hardness assumptions for classes of uniform deterministic algorithms, for which strong (but non-matching) lower bounds can be unconditionally proved. We prove one such result showing that BPL = L "on average," and a similar result showing that BPSPACE[O(n)] = DSPACE[O(n)]. Next, we significantly improve the main results of prior works on hardness-vs.-randomness for logspace. As one of our results, we relax the assumptions needed for derandomization with minimal memory footprint (i.e., showing BPSPACE[S] ⊆ DSPACE[c · S] for a small constant c) by completely eliminating a cryptographic assumption that was needed in prior work. A key contribution underlying all of our results is non-black-box use of the descriptions of space-bounded Turing machines when proving hardness-to-randomness results. That is, the crucial point allowing us to prove our results is that we use properties that are specific to space-bounded machines.
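    For context on the gap these results address, the best known unconditional simulation of BPL (a classical fact, not a result of this paper) still falls short of the conjectured collapse:

```latex
\[
  \mathsf{L} \;\subseteq\; \mathsf{BPL} \;\subseteq\; \mathsf{DSPACE}\!\left(\log^{3/2} n\right)
  \qquad \text{(Saks--Zhou, JCSS 1999)},
\]
% while the hardness-vs.-randomness results above aim at the conjectured
% equality BPL = L.
```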
  5. Ta-Shma, Amnon (Ed.)
    A central open problem in complexity theory concerns the question of whether all efficient randomized algorithms can be simulated by efficient deterministic algorithms. The celebrated "hardness vs. randomness" paradigm pioneered by Blum-Micali (SIAM JoC'84), Yao (FOCS'82) and Nisan-Wigderson (JCSS'94) presents hardness assumptions under which, e.g., prBPP = prP (so-called "high-end" derandomization) or prBPP ⊆ prSUBEXP (so-called "low-end" derandomization), and more generally under which prBPP ⊆ prDTIME(𝒞) where 𝒞 is a "nice" class (closed under composition with a polynomial); but these hardness assumptions are not known to also be necessary for such derandomization. In this work, following the recent work by Chen and Tell (FOCS'21) that considers "almost-all-input" hardness of a function f (i.e., every efficient algorithm computes f correctly on only finitely many inputs), we consider "almost-all-input" leakage-resilient hardness of a function f - that is, hardness of computing f(x) even given, say, √|x| bits of leakage of f(x). We show that leakage-resilient hardness characterizes derandomization of prBPP (i.e., gives a condition that is both necessary and sufficient for derandomization), both in the high-end and in the low-end setting. In more detail, we show that there exists a constant c such that for every function T, the following are equivalent:
    - prBPP ⊆ prDTIME(poly(T(poly(n))));
    - there exists a poly(T(poly(n)))-time computable function f : {0,1}ⁿ → {0,1}ⁿ that is almost-all-input leakage-resilient hard with respect to n^c-time probabilistic algorithms.
    As far as we know, this is the first assumption that characterizes derandomization in both the low-end and the high-end regime. Additionally, our characterization naturally extends to derandomization of prMA and to average-case derandomization, by appropriately weakening the requirements on the function f. In particular, for the case of average-case (a.k.a. "effective") derandomization, we no longer require the function to be almost-all-input hard, but simply to satisfy the more standard notion of average-case leakage-resilient hardness (w.r.t. every samplable distribution), whereas for derandomization of prMA, we instead consider leakage-resilience for relations.
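    The characterization in this abstract can be set as a single displayed equivalence, restating the two items above (T is the time bound and c the universal constant from the abstract):

```latex
\[
  \mathsf{prBPP} \subseteq \mathsf{prDTIME}\bigl(\mathrm{poly}(T(\mathrm{poly}(n)))\bigr)
  \iff
  \begin{array}{l}
    \exists\, f : \{0,1\}^n \to \{0,1\}^n \text{ computable in time } \mathrm{poly}(T(\mathrm{poly}(n))) \\
    \text{that is almost-all-input leakage-resilient hard} \\
    \text{against } n^{c}\text{-time probabilistic algorithms.}
  \end{array}
\]
```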