

Search for: All records

Award ID contains: 1900460


  1. The Exponential-Time Hypothesis (\(\mathtt{ETH}\)) is a strengthening of the \(\mathcal{P} \ne \mathcal{NP}\) conjecture, stating that \(3\text{-}\mathtt{SAT}\) on \(n\) variables cannot be solved in (uniform) time \(2^{\epsilon \cdot n}\), for some \(\epsilon > 0\). In recent years, analogous hypotheses that are "exponentially-strong" forms of other classical complexity conjectures (such as \(\mathcal{NP} \nsubseteq \mathcal{BPP}\) or \(co\mathcal{NP} \nsubseteq \mathcal{NP}\)) have also been introduced, and have become widely influential. In this work, we focus on the interaction of exponential-time hypotheses with the fundamental and closely related questions of derandomization and circuit lower bounds. We show that even relatively mild variants of exponential-time hypotheses have far-reaching implications for derandomization, circuit lower bounds, and the connections between the two. Specifically, we prove that: (1) The Randomized Exponential-Time Hypothesis (\(\mathsf{rETH}\)) implies that \(\mathcal{BPP}\) can be simulated on average-case inputs in deterministic (nearly-)polynomial time (i.e., in time \(2^{\tilde{O}(\log n)} = n^{(\log\log n)^{O(1)}}\)). The derandomization relies on a conditional construction of a pseudorandom generator with near-exponential stretch (i.e., with seed length \(\tilde{O}(\log n)\)); this significantly improves the state of the art in uniform "hardness-to-randomness" results, which previously only yielded pseudorandom generators with sub-exponential stretch from such hypotheses. (2) The Non-Deterministic Exponential-Time Hypothesis (\(\mathsf{NETH}\)) implies that derandomization of \(\mathcal{BPP}\) is completely equivalent to circuit lower bounds against \(\mathcal{E}\), and in particular that pseudorandom generators are necessary for derandomization. In fact, we show that the foregoing equivalence follows from a very weak version of \(\mathsf{NETH}\), and we also show that this very weak version is necessary to prove a slightly stronger conclusion that we deduce from it. Lastly, we show that disproving certain exponential-time hypotheses requires proving breakthrough circuit lower bounds. In particular, if \(\mathtt{CircuitSAT}\) for circuits over \(n\) bits of size \(\mathrm{poly}(n)\) can be solved by probabilistic algorithms in time \(2^{n/\mathrm{polylog}(n)}\), then \(\mathcal{BPE}\) does not have circuits of quasilinear size.
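    For reference, the hypotheses in play can be stated formally as follows; this formalization is ours (a standard one), not quoted from the paper:
      \[ \mathsf{ETH}:\ \exists\, \epsilon>0 \ \text{ s.t. } \ 3\text{-}\mathtt{SAT} \notin \mathrm{DTIME}\big(2^{\epsilon \cdot n}\big), \qquad \mathsf{rETH}:\ \exists\, \epsilon>0 \ \text{ s.t. } \ 3\text{-}\mathtt{SAT} \notin \mathrm{BPTIME}\big(2^{\epsilon \cdot n}\big), \]
    and the "nearly-polynomial" simulation time in result (1) unpacks as
      \[ 2^{\tilde{O}(\log n)} = 2^{\log n \cdot (\log\log n)^{O(1)}} = n^{(\log\log n)^{O(1)}}. \]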
  2. We present a new technique for efficiently removing almost all short cycles in a graph without unintentionally removing its triangles. Consequently, triangle-finding problems do not become easy even in almost k-cycle-free graphs, for any constant k ≥ 4. Triangle finding is at the base of many conditional lower bounds in P, mainly for distance computation problems, and the existence of many 4- or 5-cycles in a worst-case instance had been the obstacle towards resolving major open questions. Hardness of approximation: Are there distance oracles with m^{1+o(1)} preprocessing time and m^{o(1)} query time that achieve a constant approximation? Existing algorithms with such desirable time bounds only achieve super-constant approximation factors, while only 3−ε factors were conditionally ruled out (Pătraşcu, Roditty, and Thorup; FOCS 2012). We prove that no O(1) approximations are possible, assuming the 3-SUM or APSP conjectures. In particular, we prove that k-approximations require Ω(m^{1+1/ck}) time, which is tight up to the constant c. The lower bound holds even for the offline version where we are given the queries in advance, and extends to other problems such as dynamic shortest paths. The 4-Cycle problem: An infamous open question in fine-grained complexity is to establish any surprising consequences from a subquadratic or even linear-time algorithm for detecting a 4-cycle in a graph. This is arguably one of the simplest problems with neither a near-linear-time algorithm nor a conditional lower bound. We prove that Ω(m^{1.1194}) time is needed for k-cycle detection for all k ≥ 4, unless we can detect a triangle in √n-degree graphs in O(n^{2−δ}) time, a breakthrough that is not known to follow even from optimal matrix multiplication algorithms.
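    To make the 4-Cycle discussion concrete, here is the folklore quadratic-time detection algorithm that any subquadratic improvement must beat; the Python sketch below is ours (the function name find_4cycle is hypothetical, not from the paper). It uses the pigeonhole argument: a pair of vertices (a, b) with two distinct common neighbours u ≠ v closes the cycle a-u-b-v.

      # Folklore 4-cycle detection in roughly O(n^2) pair-scans
      # (sorting neighbour sets adds a log factor). A pair (a, b)
      # can repeat only with two distinct middle vertices u != v,
      # and then a-u-b-v-a is a 4-cycle; by pigeonhole, at most
      # C(n,2)+1 pairs are ever scanned before we stop.
      from itertools import combinations

      def find_4cycle(adj):
          """adj: dict mapping each vertex to the set of its neighbours.
          Returns a 4-cycle as a vertex tuple (a, u, b, v), or None."""
          seen = {}  # (a, b) -> first middle vertex found joining a and b
          for v, nbrs in adj.items():
              for a, b in combinations(sorted(nbrs), 2):
                  u = seen.get((a, b))
                  if u is not None and u != v:
                      return (a, u, b, v)  # edges a-u, u-b, b-v, v-a
                  seen[(a, b)] = v
          return None

      # Example: a 4-vertex cycle graph.
      g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
      print(find_4cycle(g))  # -> (1, 0, 3, 2), i.e., the cycle 1-0-3-2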
  3. We study the relation between the query complexity of adaptive and non-adaptive testers in the dense graph model. It has been known for a couple of decades that the query complexity of non-adaptive testers is at most quadratic in the query complexity of adaptive testers. We show that this general result is essentially tight; that is, there exist graph properties for which any non-adaptive tester must have query complexity that is almost quadratic in the query complexity of the best general (i.e., adaptive) tester. More generally, for every q: ℕ → ℕ such that q(n) ≤ √n and constant c ∈ [1,2], we show a graph property that is testable in Θ(q(n)) queries, but whose non-adaptive query complexity is Θ(q(n)^c), omitting poly(log n) factors and ignoring the effect of the proximity parameter ϵ. Furthermore, the upper bounds hold for one-sided error testers, and are at most quadratic in 1/ϵ. These results are obtained through the use of general reductions that transport properties of ordered structures (like bit strings) to those of unordered structures (like unlabeled graphs). The main features of these reductions are query-efficiency and preservation of distance to the properties. This method was initiated in our prior work (ECCC, TR20-149), and we significantly extend it here.
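    In the notation below (ours, not the paper's), writing \(Q_{\mathrm{adap}}(\Pi)\) and \(Q_{\mathrm{na}}(\Pi)\) for the adaptive and non-adaptive query complexities of a graph property \(\Pi\), the classical relation and the new separation read:
      \[ Q_{\mathrm{na}}(\Pi) = O\big(Q_{\mathrm{adap}}(\Pi)^2\big) \ \text{ for every } \Pi, \qquad \text{while for each } c \in [1,2] \text{ there is a } \Pi_c \text{ with } Q_{\mathrm{adap}}(\Pi_c) = \tilde{\Theta}(q(n)),\ Q_{\mathrm{na}}(\Pi_c) = \tilde{\Theta}\big(q(n)^c\big). \]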
  4. Braverman, Mark (Ed.)
    Grothendieck’s inequality [Grothendieck, 1953] states that there is an absolute constant K > 1 such that for any n × n matrix A, ‖A‖_{∞→1} := max_{s,t ∈ {± 1}ⁿ} ∑_{i,j} A[i,j]⋅s(i)⋅t(j) ≥ 1/K ⋅ max_{u_i,v_j ∈ S^{n-1}} ∑_{i,j} A[i,j]⋅⟨u_i,v_j⟩. In addition to having a tremendous impact on Banach space theory, this inequality has found applications in several unrelated fields like quantum information, regularity partitioning, communication complexity, etc. Let K_G (known as Grothendieck’s constant) denote the smallest constant K above. Grothendieck’s inequality implies that a natural semidefinite programming relaxation obtains a constant factor approximation to ‖A‖_{∞→1}. The exact value of K_G is still unknown, with the best lower bound (1.67…) being due to Reeds and the best upper bound (1.78…) being due to Braverman, Makarychev, Makarychev and Naor [Braverman et al., 2013]. In contrast, the little Grothendieck inequality states that under the assumption that A is PSD the constant K above can be improved to π/2, and moreover this is tight. The inapproximability of ‖A‖_{∞→1} has been studied in several papers culminating in a tight UGC-based hardness result due to Raghavendra and Steurer (remarkably they achieve this without knowing the value of K_G). Briët, Regev and Saket [Briët et al., 2015] proved tight NP-hardness of approximating the little Grothendieck problem within π/2, based on a framework by Guruswami, Raghavendra, Saket and Wu [Guruswami et al., 2016] for bypassing UGC for geometric problems. This also remained the best known NP-hardness for the general Grothendieck problem due to the nature of the Guruswami et al. framework, which utilized a projection operator onto the degree-1 Fourier coefficients of long code encodings, which naturally yielded a PSD matrix A. We show how to extend the above framework to go beyond the degree-1 Fourier coefficients, using the global structure of optimal solutions to the Grothendieck problem. As a result, we obtain a separation between the NP-hardness results for the two problems, obtaining an inapproximability result for the Grothendieck problem, of a factor π/2 + ε₀ for a fixed constant ε₀ > 0.
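    Rearranged into display form (our typesetting of the inequality and of the constants quoted above, not new results):
      \[ \max_{u_i, v_j \in S^{n-1}} \sum_{i,j} A[i,j]\,\langle u_i, v_j \rangle \;\le\; K_G \cdot \|A\|_{\infty \to 1}, \qquad 1.67\ldots \le K_G \le 1.78\ldots, \]
    with the constant improving to \(\pi/2\) (tightly) when \(A\) is PSD.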
  5. Braverman, Mark (Ed.)
    We further the study of supercritical tradeoffs in proof and circuit complexity, which is a type of tradeoff between complexity parameters where restricting one complexity parameter forces another to exceed its worst-case upper bound. In particular, we prove a new family of supercritical tradeoffs between depth and size for Resolution, Res(k), and Cutting Planes proofs. For each of these proof systems we construct, for each c ≤ n^{1-ε}, a formula with n^{O(c)} clauses and n variables that has a proof of size n^{O(c)}, but for which any proof of size at most roughly exponential in n^{1-ε}/c must necessarily have depth ≈ n^c. By setting c = o(n^{1-ε}) we therefore obtain exponential lower bounds on proof depth; this far exceeds the trivial worst-case upper bound of n. In doing so we give a simplified proof of a supercritical depth/width tradeoff for tree-like Resolution from [Alexander A. Razborov, 2016]. Finally, we outline several conjectures that would imply similar supercritical tradeoffs between size and depth in circuit complexity via lifting theorems.
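    Schematically (our condensed restatement of the parameters above): for each \(c \le n^{1-\epsilon}\) there is a formula \(F_c\) with \(n^{O(c)}\) clauses such that
      \[ \exists\ \text{proof of size } n^{O(c)}, \qquad \text{while} \qquad \mathrm{size}(\pi) \le 2^{\approx n^{1-\epsilon}/c} \ \Longrightarrow\ \mathrm{depth}(\pi) \approx n^{c}, \]
    and \(n^c\) exceeds the trivial worst-case depth bound of \(n\) whenever \(c > 1\), hence "supercritical".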
  6. Braverman, Mark (Ed.)
    A longstanding open problem in coding theory is to determine the best (asymptotic) rate R₂(δ) of binary codes with constant minimum (relative) distance δ. An existential lower bound was given by Gilbert and Varshamov in the 1950s. On the impossibility side, in the 1970s McEliece, Rodemich, Rumsey and Welch (MRRW) proved an upper bound by analyzing Delsarte’s linear programs. To date, these results remain the best known lower and upper bounds on R₂(δ), with no improvement even for the important class of linear codes. Asymptotically, these bounds differ by an exponential factor in the blocklength. In this work, we introduce a new hierarchy of linear programs (LPs) that converges to the true size A^{Lin}₂(n,d) of an optimum linear binary code (in fact, over any finite field) of a given blocklength n and distance d. This hierarchy has several notable features: 1) It is a natural generalization of the Delsarte LPs used in the first MRRW bound. 2) It is a hierarchy of linear programs rather than semi-definite programs, potentially making it more amenable to theoretical analysis. 3) It is complete in the sense that the optimum code size can be retrieved from level O(n²). 4) It provides an answer in the form of a hierarchy (in larger dimensional spaces) to the question of how to cut Delsarte’s LP polytopes to approximate the true size of linear codes. We obtain our hierarchy by generalizing the Krawtchouk polynomials and MacWilliams inequalities to a suitable "higher-order" version taking into account interactions of 𝓁 words. Our method also generalizes to translation schemes under mild assumptions.
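    For context, the two classical bounds referenced above are, in standard form (well-known facts, not quoted from the abstract):
      \[ \underbrace{R_2(\delta) \ge 1 - H_2(\delta)}_{\text{Gilbert-Varshamov}}, \qquad \underbrace{R_2(\delta) \le H_2\!\Big(\tfrac{1}{2} - \sqrt{\delta(1-\delta)}\Big)}_{\text{first MRRW (LP) bound}}, \]
    where \(H_2\) denotes the binary entropy function.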
  7. Braverman, Mark (Ed.)
    For an abelian group H acting on the set [𝓁], an (H,𝓁)-lift of a graph G₀ is a graph obtained by replacing each vertex by 𝓁 copies, and each edge by a matching corresponding to the action of an element of H. Expander graphs obtained via abelian lifts form a key ingredient in the recent breakthrough constructions of quantum LDPC codes, (implicitly) in the fiber bundle codes by Hastings, Haah and O'Donnell [STOC 2021] achieving distance Ω̃(N^{3/5}), and in those by Panteleev and Kalachev [IEEE Trans. Inf. Theory 2021] of distance Ω(N/log(N)). However, both these constructions are non-explicit. In particular, the latter relies on a randomized construction of expander graphs via abelian lifts by Agarwal et al. [SIAM J. Discrete Math 2019]. In this work, we show the following explicit constructions of expanders obtained via abelian lifts. For every (transitive) abelian group H ⩽ Sym(𝓁), constant degree d ≥ 3 and ε > 0, we construct explicit d-regular expander graphs G obtained from an (H,𝓁)-lift of a (suitable) base n-vertex expander G₀ with the following parameters: i) λ(G) ≤ 2√(d-1) + ε, for any lift size 𝓁 ≤ 2^{n^{δ}} where δ = δ(d,ε); ii) λ(G) ≤ ε ⋅ d, for any lift size 𝓁 ≤ 2^{n^{δ₀}} for a fixed δ₀ > 0, when d ≥ d₀(ε); or iii) λ(G) ≤ Õ(√d), for lift size "exactly" 𝓁 = 2^{Θ(n)}. As corollaries, we obtain explicit quantum lifted product codes of Panteleev and Kalachev of almost linear distance (and also in a wide range of parameters) and explicit classical quasi-cyclic LDPC codes with a wide range of circulant sizes. Items (i) and (ii) above are obtained by extending the techniques of Mohanty, O'Donnell and Paredes [STOC 2020] for 2-lifts to much larger abelian lift sizes (as a byproduct simplifying their construction). This is done by providing a new encoding of special walks arising in the trace power method, carefully "compressing" depth-first search traversals. Result (iii) is via a simpler proof of Agarwal et al. [SIAM J. Discrete Math 2019] at the expense of polylog factors in the expansion.
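    For orientation (standard facts, not from the abstract): \(\lambda(G)\) denotes the second-largest eigenvalue, in absolute value, of the adjacency matrix of the d-regular graph G, and by the Alon-Boppana bound every infinite family of d-regular graphs satisfies
      \[ \lambda(G) \;\ge\; 2\sqrt{d-1} - o(1), \]
    so the lifts in item (i) are near-Ramanujan, i.e., essentially optimal expanders.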
  8. Kabanets, Valentine (Ed.)
    We study pseudo-deterministic query complexity: randomized query algorithms that are required to output the same answer with high probability on all inputs. We prove Ω(√n) lower bounds on the pseudo-deterministic complexity of a large family of search problems based on unsatisfiable random CNF instances, and also for the promise problem (FIND1) of finding a 1 in a vector in which at least half the entries are ones. This gives an exponential separation between randomized query complexity and pseudo-deterministic complexity, which is tight in the quantum setting. As applications, we partially solve a related combinatorial coloring problem, and we separate random tree-like Resolution from its pseudo-deterministic version. In contrast to our lower bound, we show, surprisingly, that in the zero-error, average-case setting, the three notions (deterministic, randomized, pseudo-deterministic) collapse.
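    Formally (our rendering of the standard definition): a randomized query algorithm \(A\) solves a search problem pseudo-deterministically if there is a fixed canonical answer function \(f\) such that
      \[ \Pr[A(x) = f(x)] \;\ge\; 2/3 \quad \text{for every input } x, \]
    in contrast to an ordinary randomized search algorithm, which may output different valid answers on different runs.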