Title: Towards Stronger Depth Lower Bounds
A fundamental problem in circuit complexity is to find explicit functions that require large depth to compute. When considering the natural DeMorgan basis of {OR, AND}, where negations incur no cost, the best known depth lower bounds for an explicit function in NP have the form (3-o(1))log₂ n, established by Håstad (building on others) in the early 1990s. We make progress on the problem of improving this factor of 3, in two different ways:
- We consider an "algorithmic method" approach to proving stronger depth lower bounds for non-uniform circuits in the DeMorgan basis. We show that slightly faster algorithms (than what is known) for counting the number of satisfying assignments on subcubic-size DeMorgan formulas would imply supercubic-size DeMorgan formula lower bounds, implying that the depth must be at least (3+ε)log₂ n for some ε > 0. For example, if #SAT on formulas of size n^{2+2ε} can be solved in 2^{n - n^{1-ε}log^k n} time for some ε > 0 and a sufficiently large constant k, then there is a function computable in 2^{O(n)} time with a SAT oracle which does not have n^{3+ε}-size formulas. In fact, the #SAT algorithm only has to work on formulas that are a conjunction of n^{1-ε} subformulas, each of size n^{1+3ε}, in order to obtain the supercubic lower bound. As a proof of concept, we show that our new algorithms-to-lower-bounds connection can be applied to prove new lower bounds for "hybrid" DeMorgan formula models which compute interesting functions at their leaves.
- Turning to the {NAND} basis, we establish a greater-than-(3 log₂ n) depth lower bound against uniform circuits solving the SAT problem, using an extension of the "indirect diagonalization" method for NAND formulas. Note that circuits over the NAND basis are a special case of circuits over the DeMorgan basis; however, hard functions such as Andreev’s function (known to require depth (3-o(1))log₂ n in the DeMorgan basis) can still be computed with NAND circuits of depth (3+o(1))log₂ n. Our results imply that SAT requires polylogtime-uniform NAND circuits of depth at least 3.603 log₂ n.
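For context, the translation from formula-size lower bounds to depth lower bounds used above relies only on the standard fact that a fan-in-two formula of depth d has at most 2^d leaves; a worked statement of this step:
\[
  \mathrm{size}(F) \le 2^{\mathrm{depth}(F)}
  \quad\Longrightarrow\quad
  \mathrm{depth}(F) \ge \log_2 \mathrm{size}(F),
\]
so a size lower bound of n^{3+ε} against DeMorgan formulas computing a function forces depth at least (3+ε)·log₂ n for that function.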
Award ID(s):
2127597
PAR ID:
10579532
Author(s) / Creator(s):
Editor(s):
Guruswami, Venkatesan
Publisher / Repository:
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Date Published:
Volume:
287
ISSN:
1868-8969
ISBN:
978-3-95977-309-6
Page Range / eLocation ID:
10:1-10:24
Subject(s) / Keyword(s):
DeMorgan formulas; depth complexity; circuit complexity; lower bounds; #SAT; NAND gates; SAT; Theory of computation → Computational complexity and cryptography
Format(s):
Medium: application/pdf
Size(s):
24 pages; 984589 bytes
Right(s):
Creative Commons Attribution 4.0 International license; info:eu-repo/semantics/openAccess
Sponsoring Org:
National Science Foundation
More Like this
  1. Santhanam, Rahul (Ed.)
    Depth-3 circuit lower bounds and k-SAT algorithms are intimately related; the state-of-the-art Σ^k_3-circuit lower bound (Or-And-Or circuits with bottom fan-in at most k) and the k-SAT algorithm of Paturi, Pudlák, Saks, and Zane (J. ACM'05) are based on the same combinatorial theorem regarding k-CNFs. In this paper we define a problem which reveals new interactions between the two, and suggests a concrete approach to significantly stronger circuit lower bounds and improved k-SAT algorithms. For a natural number k and a parameter t, we consider the Enum(k, t) problem defined as follows: given an n-variable k-CNF and an initial assignment α, output all satisfying assignments at Hamming distance t(n) from α, assuming that there are no satisfying assignments at Hamming distance less than t(n) from α. We observe that an upper bound b(n, k, t) on the complexity of Enum(k, t) simultaneously implies depth-3 circuit lower bounds and k-SAT algorithms:
    - Depth-3 circuits: Any Σ^k_3 circuit computing the Majority function has size at least binom(n, n/2)/b(n, k, n/2).
    - k-SAT: There exists an algorithm solving k-SAT in time O(∑_{t=1}^{n/2} b(n, k, t)).
    A simple construction shows that b(n, k, n/2) ≥ 2^{(1 - O(log(k)/k))n}. Thus, matching upper bounds for b(n, k, n/2) would imply a Σ^k_3-circuit lower bound of 2^{Ω(log(k)n/k)} and a k-SAT upper bound of 2^{(1 - Ω(log(k)/k))n}. The former yields an unrestricted depth-3 lower bound of 2^{ω(√n)}, solving a long-standing open problem, and the latter breaks the Super Strong Exponential Time Hypothesis. In this paper, we propose a randomized algorithm for Enum(k, t) and introduce new ideas to analyze it. We demonstrate the power of our ideas by considering the first non-trivial instance of the problem, i.e., Enum(3, n/2). We show that the expected running time of our algorithm is 1.598ⁿ, substantially improving on the trivial bound of 3^{n/2} ≃ 1.732ⁿ. This already improves the Σ^3_3 lower bound for the Majority function to 1.251ⁿ; the previous bound was 1.154ⁿ, which follows from the work of Håstad, Jukna, and Pudlák (Comput. Complex.'95). By restricting ourselves to monotone CNFs, Enum(k, t) immediately becomes a hypergraph Turán problem. Therefore our techniques might be of independent interest in extremal combinatorics.
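    The stated constants are consistent with a quick check (up to polynomial factors, using only the standard estimate binom(n, n/2) = Θ(2ⁿ/√n)):
    \[
      \frac{\binom{n}{n/2}}{b(n,3,n/2)} = \frac{\Theta(2^{n}/\sqrt{n})}{1.598^{n}} \approx \Big(\tfrac{2}{1.598}\Big)^{n} \approx 1.251^{n},
      \qquad
      3^{n/2} = (\sqrt{3})^{n} \approx 1.732^{n}.
    \]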
  2. Barman, Siddharth; Lasota, Sławomir (Eds.)
    We consider the problem of estimating the size of a maximum matching in low-arboricity graphs in the dynamic graph stream model. In this setting, an algorithm with limited memory makes multiple passes over a stream of edge insertions and deletions whose end result is a low-arboricity graph. Let n be the number of vertices of the input graph, and α be its arboricity. We give the following results.
    1) As our main result, we give a three-pass streaming algorithm that produces an (α + 2)(1 + ε)-approximation and uses space O(ε^{-2}⋅α²⋅n^{1/2}⋅log n). This result should be contrasted with the Ω(α^{-5/2}⋅n^{1/2}) space lower bound established by [Assadi et al., SODA'17] for one-pass algorithms, showing that, for graphs of constant arboricity, the one-pass space lower bound can be achieved in three passes (up to poly-logarithmic factors). Furthermore, we obtain a two-pass algorithm that uses space O(ε^{-2}⋅α²⋅n^{3/5}⋅log n).
    2) We also give a (1+ε)-approximation multi-pass algorithm, where the space used is parameterized by an upper bound on the size of a largest matching. For example, using O(log log n) passes, the space required is O(ε^{-1}⋅α²⋅k⋅log n), where k denotes an upper bound on the size of a largest matching.
    Finally, we define a notion of arboricity in the context of matrices. This is a natural measure of the sparsity of a matrix that is more nuanced than simply bounding the total number of nonzero entries, but less restrictive than bounding the number of nonzero entries in each row and column. For such matrices, we exploit our results on estimating matching size to present upper bounds for the problem of rank estimation in the dynamic data stream model.
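    As a concrete instantiation (using the standard fact that planar graphs have arboricity at most 3, so α = O(1) there): with constant ε, the three-pass space bound and the one-pass lower bound become
    \[
      O(\varepsilon^{-2}\,\alpha^{2}\,\sqrt{n}\,\log n) = \tilde{O}(\sqrt{n})
      \qquad\text{vs.}\qquad
      \Omega(\alpha^{-5/2}\,\sqrt{n}) = \Omega(\sqrt{n}),
    \]
    matching up to poly-logarithmic factors, while the approximation factor (α + 2)(1 + ε) is at most 5(1 + ε) for planar graphs.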
  3. One powerful theme in complexity theory and pseudorandomness in the past few decades has been the use of lower bounds to give pseudorandom generators (PRGs). However, the general results using this hardness vs. randomness paradigm suffer from a quantitative loss in parameters, and hence do not give nontrivial implications for models where we don't know super-polynomial lower bounds but do know lower bounds of a fixed polynomial. We show that when such lower bounds are proved using random restrictions, we can construct PRGs that are essentially best possible without in turn improving the lower bounds. More specifically, say that a circuit family has shrinkage exponent Γ if a random restriction leaving a p fraction of variables unset shrinks the size of any circuit in the family by a factor of p^{Γ + o(1)}. Our PRG uses a seed of length s^{1/(Γ + 1) + o(1)} to fool circuits in the family of size s. By using this generic construction, we get PRGs with polynomially small error for the following classes of circuits of size s and with the following seed lengths:
    1. For de Morgan formulas, seed length s^{1/3+o(1)};
    2. For formulas over an arbitrary basis, seed length s^{1/2+o(1)};
    3. For read-once de Morgan formulas, seed length s^{.234...};
    4. For branching programs of size s, seed length s^{1/2+o(1)}.
    The previous best PRGs known for these classes used seeds of length bigger than n/2 to output n bits, and worked only when the size s = O(n).
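    For reference, the listed seed lengths follow from the generic bound s^{1/(Γ + 1) + o(1)} together with the known shrinkage exponents of these classes (Γ = 2 for de Morgan formulas, Γ = 1 for formulas over an arbitrary basis and for branching programs, and Γ ≈ 3.27 for read-once de Morgan formulas):
    \[
      \Gamma = 2 \;\Rightarrow\; s^{1/3+o(1)},
      \qquad
      \Gamma = 1 \;\Rightarrow\; s^{1/2+o(1)},
      \qquad
      \Gamma \approx 3.27 \;\Rightarrow\; s^{1/(\Gamma+1)} \approx s^{0.234}.
    \]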
  4. We present new constructions of pseudorandom generators (PRGs) for two of the most widely studied non-uniform circuit classes in complexity theory. Our main result is a construction of the first non-trivial PRG for linear threshold (LTF) circuits of arbitrary constant depth and super-linear size. This PRG fools circuits with depth d ∈ ℕ and n^{1+δ} wires, where δ = 2^{-O(d)}, using seed length O(n^{1-δ}) and with error 2^{-n^δ}. This tightly matches the best known lower bounds for this circuit class. As a consequence of our result, all the known hardness for LTF circuits has now effectively been translated into pseudorandomness. This brings the extensive effort in the last decade to construct PRGs and deterministic circuit-analysis algorithms for this class to the point where any subsequent improvement would yield breakthrough lower bounds. Our second contribution is a PRG for De Morgan formulas of size s whose seed length is s^{1/3+o(1)}⋅polylog(1/ε) for error ε. In particular, our PRG can fool formulas of sub-cubic size s = n^{3-Ω(1)} with an exponentially small error ε = exp(-n^{Ω(1)}). This significantly improves the inverse-polynomial error of the previous state-of-the-art for such formulas by Impagliazzo, Meka, and Zuckerman (FOCS 2012, JACM 2019), and again tightly matches the best currently-known lower bounds for this class. In both settings, a key ingredient in our constructions is a pseudorandom restriction procedure that has tiny failure probability, but simplifies the function to a non-natural "hybrid computational model" that combines several computational models.
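    To make the sub-cubic regime explicit (taking n to be the number of input variables): writing s = n^{3-c} for a constant c > 0, the seed length of the de Morgan-formula PRG is
    \[
      s^{1/3+o(1)} = n^{(3-c)/3 + o(1)} = n^{1 - c/3 + o(1)},
    \]
    which is sublinear in n, while the error remains exp(-n^{Ω(1)}).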