NSF Public Access Repository (NSF-PAR) — Search for: All records

Award ID contains: 2200956


  1. Saraf, Shubhangi (Ed.)
    Error correcting codes encode messages by codewords in such a way that even if some of the codeword is corrupted, the message can be decoded. Typical decoding algorithms for error correcting codes either use linear space or quadratic time. A natural question is whether codes can be decoded in near-linear time and sub-linear space simultaneously. A recent result by Cook and Moshkovitz gave efficient decoders that can uniquely decode Reed-Muller and other codes from a constant fraction (less than half) of corruption. In this work, we address the problem of list decoding in near-linear time and sub-linear space. In the list decoding setting, most of the codeword is corrupted, and one wants to output a short list of potential messages that contains the true message. For any constants γ, τ > 0, we give decoders for Reed-Muller codes that can decode from 1-γ fraction of corruptions in time n^{1+τ} and space n^{τ}. Our decoders work by extending the iterative correction technique of Cook and Moshkovitz. However, that technique, which gradually decreases the number of corruptions in the message, was tailored to the unique decoding setting. We first identify an intermediate problem, codewords list recovery, for which we can make iterative correction work. We then show how to reduce general list decoding to the codewords list recovery problem in efficient time and space. The reduction relies on local correction and testing. In the codewords list recovery problem, the input consists of n unordered lists containing exactly the symbols from L codewords, where a small fraction of the lists is corrupted. The goal is to find the L codewords. In addition, we prove that any linear code with time-space efficient encoding or decoding must be local, in the sense that the codewords satisfy a local linear constraint. This rules out codes like Reed-Solomon from having time-space efficient encoding or decoding. 
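The list decoding setting described above can be illustrated with a toy brute-force decoder. This is not the paper's algorithm (which decodes Reed-Muller codes in near-linear time and sub-linear space); it is a minimal sketch over a small Reed-Solomon code, with the field size, degree bound, corruption pattern, and agreement threshold all chosen arbitrarily for illustration.

```python
# Toy brute-force list decoder for a small Reed-Solomon code.
# All parameters are hypothetical; this only illustrates what
# "output a short list containing the true message" means.
from itertools import product

P = 7           # prime field size (small so enumeration is feasible)
K = 2           # message length: polynomials of degree < K
N = P           # evaluate at every field point

def encode(msg):
    # Codeword = evaluations of the message polynomial at 0..P-1.
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in range(N)]

def list_decode(received, agreement):
    # Output every message whose codeword agrees with `received`
    # on at least `agreement` positions (exponential time: a sketch,
    # nothing like the paper's time- and space-efficient decoders).
    out = []
    for msg in product(range(P), repeat=K):
        cw = encode(list(msg))
        if sum(a == b for a, b in zip(cw, received)) >= agreement:
            out.append(list(msg))
    return out

msg = [3, 5]
word = encode(msg)
# Corrupt most of the codeword: only 3 of 7 symbols stay intact.
corrupted = [(s + 1) % P if i >= 3 else s for i, s in enumerate(word)]
candidates = list_decode(corrupted, agreement=3)
assert msg in candidates   # the true message survives in the list
```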
  2. Srinivasan, Srikanth (Ed.)
    A natural model of a source of randomness consists of a long stream of symbols X = X_1∘…∘X_t, with some guarantee on the entropy of X_i conditioned on the outcome of the prefix x_1,…,x_{i-1}. We study unpredictable sources, a generalization of the almost Chor-Goldreich (CG) sources considered in [Doron et al., 2023]. In an unpredictable source X, for a typical draw of x ∼ X, for most indices i, the element x_i has a low probability of occurring given x_1,…,x_{i-1}. Such a model relaxes the often unrealistic assumption of a CG source that for every i and every x_1,…,x_{i-1}, the next symbol X_i has sufficiently large entropy. Unpredictable sources subsume all previously considered notions of almost CG sources, including notions that [Doron et al., 2023] failed to analyze, and including those that are equivalent to general sources with high min-entropy.
    For a lossless expander G = (V,E) with m = log |V|, we consider a random walk V_0,V_1,…,V_t on G using unpredictable instructions that have sufficient entropy with respect to m. Our main theorem is that for almost all the steps t/2 ≤ i ≤ t in the walk, the vertex V_i is close to a distribution with min-entropy at least m−O(1).
    As a result, we obtain seeded online condensers with constant entropy gap, and seedless (deterministic) condensers outputting a constant fraction of the entropy. In particular, our condensers run in space comparable to the output entropy, as opposed to the size of the stream, even when the length t of the stream is not known ahead of time. As another corollary, we obtain a new extractor based on expander random walks that handles lower entropy than the classic expander-based construction relying on spectral techniques [Gillman, 1998].
    As our main technical tool, we provide a novel analysis covering a key case of adversarial random walks on lossless expanders that [Doron et al., 2023] fails to address. As part of the analysis, we provide a "chain rule for vertex probabilities". The standard chain rule states that for every x ∼ X and i, Pr(x_1,…,x_i) = Pr[X_i = x_i | X_{[1,i-1]} = x_1,…,x_{i-1}] ⋅ Pr(x_1,…,x_{i-1}). If W(x_1,…,x_i) is the vertex reached using x_1,…,x_i, then the chain rule for vertex probabilities essentially states that the same phenomenon occurs for a typical x: Pr[V_i = W(x_1,…,x_i)] ≲ Pr[X_i = x_i | X_{[1,i-1]} = x_1,…,x_{i-1}] ⋅ Pr[V_{i-1} = W(x_1,…,x_{i-1})], where V_i is the vertex distribution of the random walk at step i using X.
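The "chain rule for vertex probabilities" in the abstract above is an approximate analogue, for the walk's vertex distribution, of the exact chain rule it quotes. A minimal numeric check of that exact base fact, on a hypothetical two-symbol joint distribution (the probabilities are arbitrary choices):

```python
# Verify Pr(x_1, x_2) = Pr[X_2 = x_2 | X_1 = x_1] * Pr(x_1)
# on a small hypothetical joint distribution over {0,1}^2.
from itertools import product

joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

def pr_prefix(x1):
    # Marginal probability of the prefix x_1.
    return sum(p for (a, _), p in joint.items() if a == x1)

def pr_cond(x2, x1):
    # Conditional probability of the next symbol given the prefix.
    return joint[(x1, x2)] / pr_prefix(x1)

for x1, x2 in product((0, 1), repeat=2):
    lhs = joint[(x1, x2)]
    rhs = pr_cond(x2, x1) * pr_prefix(x1)
    assert abs(lhs - rhs) < 1e-12
```

The paper's version replaces the exact equality by a "≲" that holds only for a typical draw of x, which is what makes the adversarially correlated walk analysis nontrivial.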
  3. Megow, Nicole; Smith, Adam (Ed.)
    We prove that for some constant a > 1 and for all k ≤ a, MATIME[n^{k(1+o(1))}]/1 ⊄ SIZE[O(n^k)], for some specific o(1) function. This is a super-linear polynomial circuit lower bound. Previously, Santhanam [Santhanam, 2007] showed that there exists a constant c > 1 such that for all k > 1: MATIME[n^{ck}]/1 ⊄ SIZE[O(n^k)]. Inherent to Santhanam's proof, c is a large constant and there is no upper bound on c. Using ideas from Murray and Williams [Murray and Williams, 2018], one can show for all k > 1: MATIME[n^{10k²}]/1 ⊄ SIZE[O(n^k)]. To prove our result, we construct the first PCP for SPACE[n] with quasi-linear verifier time: our PCP has an Õ(n)-time verifier, an Õ(n)-space prover, O(log n) queries, and a polynomial alphabet size. Prior to this work, PCPs for SPACE[O(n)] had verifiers that run in Ω(n²) time. This PCP also proves that NE has MIP verifiers which run in time Õ(n).
  4. Santhanam, Rahul (Ed.)
    We give the first explicit constant-rate, constant-relative-distance linear codes with an encoder that runs in time n^{1+o(1)} and space polylog(n), provided random access to the message. Prior to this work, the only such codes were non-explicit, for instance repeat-accumulate codes [Divsalar et al., 1998] and the codes described in [Gál et al., 2013]. To construct our codes, we also give explicit, efficiently invertible, lossless condensers with constant entropy gap and polylogarithmic seed length. In contrast to encoders with random access to the message, we show that encoders with sequential access to the message cannot run in almost linear time and polylogarithmic space. Our notion of sequential access is much stronger than streaming access.
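The repeat-accumulate codes cited above as prior non-explicit constructions admit a very short encoder sketch: repeat each message bit, permute, then accumulate with a running XOR. The seeded shuffle below is a stand-in chosen only for illustration; in the actual construction the interleaver is random, which is precisely why those codes are non-explicit.

```python
# Sketch of a repeat-accumulate encoder: repeat, permute, accumulate.
# The permutation seed and repetition factor are arbitrary choices.
import random

def ra_encode(bits, r=3, seed=0):
    repeated = [b for b in bits for _ in range(r)]   # repeat each bit r times
    perm = list(range(len(repeated)))
    random.Random(seed).shuffle(perm)                # fixed "interleaver"
    permuted = [repeated[i] for i in perm]
    out, acc = [], 0
    for b in permuted:                               # accumulate: prefix XOR
        acc ^= b
        out.append(acc)
    return out

cw = ra_encode([1, 0, 1, 1])
assert len(cw) == 12 and set(cw) <= {0, 1}
```

All three stages (repetition, permutation, prefix XOR) are linear over GF(2), so the resulting code is linear; note the accumulator makes the encoder inherently sequential, in contrast to the random-access encoders the abstract constructs.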
  5. Iwata, Satoru; Kakimura, Naonori (Ed.)
    In a regular PCP the verifier queries each proof symbol in the same number of tests. This number is called the degree of the proof, and it is at least 1/(sq), where s is the soundness error and q is the number of queries. Regularity and reduced degree are incredibly useful in PCPs. There is an expander-based transformation by Papadimitriou and Yannakakis that transforms any PCP with a constant number of queries and constant soundness error into a regular PCP with constant degree. There are also transformations for low-error projection PCPs and unique PCPs. Other PCPs are constructed especially to be regular. In this work we show how to regularize and reduce the degree of PCPs with a possibly large number of queries and low soundness error. As an application, we prove NP-hardness of an unweighted variant of the collective minimum monotone satisfying assignment problem, which was introduced by Hirahara (FOCS'22) to prove NP-hardness of MCSP^* (the partial function variant of the Minimum Circuit Size Problem) under randomized reductions. We present a simplified proof and sufficient conditions under which MCSP^* is NP-hard under the standard notion of reduction: MCSP^* is NP-hard under deterministic polynomial-time many-one reductions if there exists a function in E that satisfies certain direct sum properties.
  6. Megow, Nicole; Smith, Adam (Ed.)
    The celebrated IP = PSPACE Theorem gives an efficient interactive proof for any bounded-space algorithm. In this work we study interactive proofs for nondeterministic bounded-space computations. While Savitch's Theorem shows that nondeterministic bounded-space algorithms can be simulated by deterministic bounded-space algorithms, this simulation has a quadratic overhead. We give interactive protocols for nondeterministic algorithms directly to get faster verifiers. More specifically, for any nondeterministic space-S algorithm, we construct an interactive proof in which the verifier runs in time Õ(n+S²). This improves on the best previous bound of Õ(n+S³) and matches the result for deterministic space-bounded algorithms, up to polylog(S) factors. We further generalize to alternating bounded-space algorithms. For any language L decided by a time-T, space-S algorithm that uses d alternations, we construct an interactive proof in which the verifier runs in time Õ(n + S⋅log(T) + S⋅d) and the prover runs in time 2^O(S). For d = O(log(T)), this matches the best known interactive proofs for deterministic algorithms, up to polylog(S) factors, and improves on the previous best verifier time for nondeterministic algorithms by a factor of log(T). We also improve the best prior verifier time for unbounded alternations by a factor of S. Using known connections of bounded-alternation algorithms to bounded-depth circuits, we also obtain faster verifiers for bounded-depth circuits with unbounded fan-in.
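The quadratic overhead of Savitch's Theorem mentioned above comes from a midpoint recursion for reachability in the configuration graph. A minimal sketch of that recursion on an explicit toy digraph (the graph and its parameters are hypothetical choices, not a configuration graph of any machine):

```python
# Savitch-style midpoint recursion for st-reachability.
# Depth is O(log steps) and each frame stores O(log n) bits of state,
# which is the source of the O(S^2)-space (quadratic) overhead.
def reach(adj, u, v, steps, n):
    """Is there a path u -> v of length <= steps in the digraph `adj`?"""
    if steps == 0:
        return u == v
    if steps == 1:
        return u == v or (u, v) in adj
    half = steps // 2 + steps % 2
    # Guess a midpoint w and recurse on both halves.
    return any(reach(adj, u, w, half, n) and reach(adj, w, v, steps - half, n)
               for w in range(n))

edges = {(0, 1), (1, 2), (2, 3)}     # a 4-vertex path, for illustration
assert reach(edges, 0, 3, 4, 4)      # 0 reaches 3 within 4 steps
assert not reach(edges, 3, 0, 4, 4)  # but not backwards
```

The interactive proofs in the abstract avoid exactly this recursion's overhead by letting the prover, rather than a deterministic simulation, supply the midpoints.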
  7. A Chor–Goldreich (CG) source is a sequence of random variables X = X_1∘…∘X_t, where each X_i ∼ {0,1}^d and X_i has δd min-entropy conditioned on any fixing of X_1∘…∘X_{i-1}. The parameter 0 < δ ≤ 1 is the entropy rate of the source. We typically think of d as constant and t as growing. We extend this notion in several ways, defining almost CG sources. Most notably, we allow each X_i to have only conditional Shannon entropy δd. We achieve pseudorandomness results for almost CG sources which were not known to hold even for standard CG sources, and even for the weaker model of Santha–Vazirani sources: We construct a deterministic condenser that on input X outputs a distribution which is close to having constant entropy gap, namely a distribution Z ∼ {0,1}^m for m ≈ δdt with min-entropy m−O(1). Therefore, we can simulate any randomized algorithm with small failure probability using almost CG sources with no multiplicative slowdown. This result extends to randomized protocols as well, and to any setting in which we cannot simply cycle over all seeds and a "one-shot" simulation is needed. Moreover, our construction works in an online manner, since it is based on random walks on expanders. Our main technical contribution is a novel analysis of random walks, which should be of independent interest. We analyze walks with adversarially correlated steps, each step being entropy-deficient, on good enough lossless expanders. We prove that such walks (or certain interleaved walks on two expanders), starting from a fixed vertex and walking according to X_1∘…∘X_t, accumulate most of the entropy in X.
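The entropy-accumulation phenomenon in the walks above can be observed numerically on a toy graph. The graph below is an ad-hoc 3-regular directed Cayley graph on Z_101 with arbitrary edge labels, not a lossless expander, and the biased step distribution is a hypothetical entropy-deficient source (i.i.d., so far weaker than the adversarial steps the paper handles); the sketch only tracks the exact vertex distribution and watches its min-entropy grow.

```python
# Toy demonstration: min-entropy of the vertex distribution of a walk
# driven by entropy-deficient steps accumulates over time.
# Graph and step distribution are arbitrary illustrative choices.
from math import log2

P = 101
STEPS = [1, 10, 57]        # edge labels of the 3-regular graph (arbitrary)
BIAS = [0.6, 0.2, 0.2]     # each step carries < log2(3) bits of entropy

dist = [0.0] * P
dist[0] = 1.0              # start at a fixed vertex
entropies = []
for _ in range(30):
    new = [0.0] * P
    for v, pv in enumerate(dist):
        if pv:
            for s, ps in zip(STEPS, BIAS):
                new[(v + s) % P] += pv * ps
    dist = new
    entropies.append(-log2(max(dist)))   # min-entropy of vertex distribution

assert entropies[-1] > entropies[0]      # entropy accumulates along the walk
```

After one step the min-entropy is only -log2(0.6) ≈ 0.74 bits; over the walk it grows well beyond that, which is the (much weaker, numerical) shadow of the paper's theorem that on lossless expanders the walk accumulates most of the entropy in X.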