

Search for: All records

Creators/Authors contains: "Moshkovitz, Dana"


  1. Megow, Nicole; Smith, Adam (Ed.)
    We prove that for some constant a > 1, for all k ≤ a, MATIME[n^{k(1+o(1))}]/1 ⊄ SIZE[O(n^k)], for some specific o(1) function. This is a super-linear polynomial circuit lower bound. Previously, Santhanam [Santhanam, 2007] showed that there exists a constant c > 1 such that for all k > 1: MATIME[n^{ck}]/1 ⊄ SIZE[O(n^k)]. In Santhanam’s proof, c is inherently a large constant and there is no upper bound on c. Using ideas from Murray and Williams [Murray and Williams, 2018], one can show for all k > 1: MATIME[n^{10k²}]/1 ⊄ SIZE[O(n^k)]. To prove this result, we construct the first PCP for SPACE[n] with quasi-linear verifier time: our PCP has an Õ(n)-time verifier, an Õ(n)-space prover, O(log(n)) queries, and a polynomial-size alphabet. Prior to this work, PCPs for SPACE[O(n)] had verifiers that run in Ω(n²) time. This PCP also proves that NE has MIP verifiers which run in time Õ(n). 
    Free, publicly-accessible full text available September 4, 2025
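    For reference, the three bounds discussed in this abstract can be placed side by side; the LaTeX display below simply restates them in one place and adds nothing beyond what is claimed above (k, c, a and the o(1) term are as described in the abstract; align* requires amsmath).
      % Bounds restated from the abstract above.
      \begin{align*}
        \text{Santhanam (2007):}     &\quad \mathrm{MATIME}[n^{ck}]/1 \not\subseteq \mathrm{SIZE}[O(n^k)] \ \text{for all } k>1,\ c \text{ a large, unbounded constant},\\
        \text{via Murray--Williams:} &\quad \mathrm{MATIME}[n^{10k^2}]/1 \not\subseteq \mathrm{SIZE}[O(n^k)] \ \text{for all } k>1,\\
        \text{this work:}            &\quad \mathrm{MATIME}[n^{k(1+o(1))}]/1 \not\subseteq \mathrm{SIZE}[O(n^k)] \ \text{for all } k \le a.
      \end{align*}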
  2. Free, publicly-accessible full text available July 7, 2025
  3. Iwata, Satoru; Kakimura, Naonori (Ed.)
    In a regular PCP the verifier queries each proof symbol in the same number of tests. This number is called the degree of the proof, and it is at least 1/(sq), where s is the soundness error and q is the number of queries. It is very useful for a PCP to be regular and to have low degree. There is an expander-based transformation by Papadimitriou and Yannakakis that transforms any PCP with a constant number of queries and constant soundness error into a regular PCP with constant degree. There are also transformations for low-error projection and unique PCPs. Other PCPs are constructed especially to be regular. In this work we show how to regularize PCPs and reduce their degree when the number of queries may be large and the soundness error is low. As an application, we prove NP-hardness of an unweighted variant of the collective minimum monotone satisfying assignment problem, which was introduced by Hirahara (FOCS'22) to prove NP-hardness of MCSP^* (the partial-function variant of the Minimum Circuit Size Problem) under randomized reductions. We present a simplified proof and sufficient conditions under which MCSP^* is NP-hard under the standard notion of reduction: MCSP^* is NP-hard under deterministic polynomial-time many-one reductions if there exists a function in E that satisfies certain direct sum properties. 
    Free, publicly-accessible full text available November 28, 2024
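    As a reading aid, here is one standard way to see the 1/(sq) degree bound quoted in this abstract, assuming every individual test has at least one accepting local view; this is a generic greedy argument, not necessarily the proof the paper uses. With T tests and degree d, greedily pick a test none of whose symbols has been fixed, fix its q symbols to an accepting local view, and discard the at most qd tests that share a symbol with it; repeating this yields a proof satisfying at least T/(qd) tests, so on a no-instance the soundness error obeys s ≥ 1/(qd), i.e.
      % Greedy counting (assumes each test is individually satisfiable).
      \[
        \#\{\text{satisfied tests}\} \;\ge\; \frac{T}{q\,d}
        \qquad\Longrightarrow\qquad
        s \;\ge\; \frac{1}{q\,d}
        \qquad\Longrightarrow\qquad
        d \;\ge\; \frac{1}{s\,q}.
      \]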
  4. A Chor–Goldreich (CG) source is a sequence of random variables X = X_1 ∘ … ∘ X_t, where each X_i ∼ {0,1}^d and X_i has δd min-entropy conditioned on any fixing of X_1 ∘ … ∘ X_{i−1}. The parameter 0 < δ ≤ 1 is the entropy rate of the source. We typically think of d as constant and t as growing. We extend this notion in several ways, defining almost CG sources. Most notably, we allow each X_i to only have conditional Shannon entropy δd. We achieve pseudorandomness results for almost CG sources which were not known to hold even for standard CG sources, and even for the weaker model of Santha–Vazirani sources: We construct a deterministic condenser that on input X outputs a distribution which is close to having constant entropy gap, namely a distribution Z ∼ {0,1}^m for m ≈ δdt with min-entropy m − O(1). Therefore, we can simulate any randomized algorithm with small failure probability using almost CG sources with no multiplicative slowdown. This result extends to randomized protocols as well, and to any setting in which we cannot simply cycle over all seeds and a “one-shot” simulation is needed. Moreover, our construction works in an online manner, since it is based on random walks on expanders. Our main technical contribution is a novel analysis of random walks, which should be of independent interest. We analyze walks with adversarially correlated steps, each step being entropy-deficient, on good enough lossless expanders. We prove that such walks (or certain interleaved walks on two expanders), starting from a fixed vertex and walking according to X_1 ∘ … ∘ X_t, accumulate most of the entropy in X. 
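    In symbols, the source model and the condenser guarantee described above read roughly as follows. This is only a restatement in the abstract's notation: H_∞ denotes min-entropy, H denotes Shannon entropy, "Cond" is a generic name for the condenser (not the paper's notation), and the closeness parameter is left implicit.
      % Chor-Goldreich source: every block has conditional min-entropy \delta d.
      \[
        X = X_1 \circ \cdots \circ X_t,\qquad X_i \in \{0,1\}^d,\qquad
        H_\infty\!\bigl(X_i \,\big|\, X_1 \circ \cdots \circ X_{i-1} = x\bigr) \ge \delta d \ \ \text{for every prefix } x.
      \]
      % Almost-CG relaxation considered in this work: conditional Shannon entropy suffices.
      \[
        H\bigl(X_i \,\big|\, X_1 \circ \cdots \circ X_{i-1}\bigr) \ge \delta d.
      \]
      % Deterministic condenser guarantee: output close to a distribution with constant entropy gap.
      \[
        \mathrm{Cond}(X) \;\approx\; Z \sim \{0,1\}^m,\qquad H_\infty(Z) \ge m - O(1),\qquad m \approx \delta d t.
      \]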
  5. Existing proofs that deduce BPP = P from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large polynomial slowdown. We convert randomized algorithms into deterministic ones with little slowdown. Specifically, assuming exponential lower bounds against randomized NP ∩ coNP circuits, formally known as randomized SVN circuits, we convert any randomized algorithm over inputs of length n running in time t ≥ n into a deterministic one running in time t^{2+α} for an arbitrarily small constant α > 0. Such a slowdown is nearly optimal for t close to n, since under standard complexity-theoretic assumptions, there are problems with an inherent quadratic derandomization slowdown. We also convert any randomized algorithm that errs rarely into a deterministic algorithm having a similar running time (with pre-processing). The latter derandomization result holds under weaker assumptions, of exponential lower bounds against deterministic SVN circuits. Our results follow from a new, nearly optimal, explicit pseudorandom generator fooling circuits of size s with seed length (1+α) log s, under the assumption that there exists a function f ∈ E that requires randomized SVN circuits of size at least 2^{(1−α′)n}, where α = O(α′). The construction uses, among other ideas, a new connection between pseudoentropy generators and locally list-recoverable codes. 
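    A back-of-the-envelope account of where the t^{2+α} slowdown comes from, using only the parameters stated in this abstract: derandomize by enumerating all seeds of the generator, taking the circuit size of a time-t algorithm to be roughly s ≈ t and suppressing polylogarithmic factors and the cost of evaluating the generator (these simplifications are ours, not the paper's accounting).
      % Seed enumeration with seed length (1+\alpha)\log s and s \approx t:
      \[
        \#\{\text{seeds}\} = 2^{(1+\alpha)\log s} = s^{1+\alpha} \approx t^{1+\alpha},
      \]
      % one time-t run per enumerated seed, so
      \[
        \text{deterministic time} \;\approx\; t^{1+\alpha}\cdot t \;=\; t^{2+\alpha}.
      \]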
  6. Braverman, Mark (Ed.)
  7. Bojańczyk, Mikołaj; Chekuri, Chandra (Ed.)
  8. null (Ed.)
  9. null (Ed.)