Title: Polynomial-Time Axioms of Choice and Polynomial-Time Cardinality
Abstract

There is no single canonical polynomial-time version of the Axiom of Choice (AC); several statements of AC that are equivalent in Zermelo-Fraenkel (ZF) set theory are already inequivalent from a constructive point of view, and are similarly inequivalent from a complexity-theoretic point of view. In this paper we show that many classical formulations of AC, when restricted to polynomial time in natural ways, are equivalent to standard complexity-theoretic hypotheses, including several that were of interest to Selman. This provides a unified view of these hypotheses and, we hope, additional motivation for studying some of the lesser-known hypotheses that appear here. Additionally, because several classical forms of AC are formulated in terms of cardinals, we develop a theory of polynomial-time cardinality. Nerode & Remmel (Contemp. Math. 106, 1990 and Springer Lec. Notes Math. 1432, 1990) developed a related theory, but restricted to unary sets. Downey (Math. Reviews MR1071525) suggested that such a theory over larger alphabets could have interesting connections to more standard complexity questions, and we illustrate some of those connections here. The connections between AC, cardinality, and complexity questions also allow us to highlight some of Selman's work. We hope this paper is more of a beginning than an end, introducing new concepts and raising many new questions, ripe for further research.
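To give a concrete flavor of the correspondence described above, here is one classical formulation of AC together with a natural polynomial-time restriction. This pairing is only an illustrative sketch; the abstract does not specify which formulations and equivalences the paper itself proves.

\[
\textbf{AC (surjection form):}\qquad
\forall\, f\colon A \twoheadrightarrow B \;\; \exists\, g\colon B \to A \;\; \forall\, b \in B\colon\; f(g(b)) = b.
\]
\[
\textbf{One polynomial-time restriction:}\qquad
\begin{array}{l}
\text{every honest, polynomial-time computable surjection } f\colon \Sigma^{*} \to \Sigma^{*} \text{ has a}\\
\text{polynomial-time computable } g \text{ with } f(g(y)) = y \text{ for every } y \in \Sigma^{*}.
\end{array}
\]

Here "honest" means |x| ≤ p(|f(x)|) for some polynomial p, so a right inverse has room to write its output. The restricted statement is closely related to hypotheses about inverting onto functions studied by Selman and his coauthors (sometimes called proposition Q); it follows from P = NP but is not known to be equivalent to it, illustrating how different classical forms of AC can land at different complexity hypotheses.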

 
Award ID(s): 2047756
PAR ID: 10413665
Publisher / Repository: Springer Science + Business Media
Journal Name: Theory of Computing Systems
Volume: 67
Issue: 3
ISSN: 1432-4350
Pages: 627-669
Sponsoring Org: National Science Foundation
More Like this
  1. In the Maximum Independent Set problem we are asked to find a set of pairwise nonadjacent vertices in a given graph with the maximum possible cardinality. In general graphs, this classical problem is known to be NP-hard and hard to approximate within a factor of n^{1−ε} for any ε > 0. Due to this, investigating the complexity of Maximum Independent Set in various graph classes in hope of finding better tractability results is an active research direction. In H-free graphs, that is, graphs not containing a fixed graph H as an induced subgraph, the problem is known to remain NP-hard and APX-hard whenever H contains a cycle, a vertex of degree at least four, or two vertices of degree at least three in one connected component. For the remaining cases, where every component of H is a path or a subdivided claw, the complexity of Maximum Independent Set remains widely open, with only a handful of polynomial-time solvability results for small graphs H such as P5, P6, the claw, or the fork. We prove that for every such “possibly tractable” graph H there exists an algorithm that, given an H-free graph G and an accuracy parameter ε > 0, finds an independent set in G of cardinality within a factor of (1 − ε) of the optimum in time exponential in a polynomial of log |V(G)| and ε^{−1}. That is, we show that for every graph H for which Maximum Independent Set is not known to be APX-hard in H-free graphs, the problem admits a quasi-polynomial time approximation scheme in this graph class. Our algorithm works also in the more general weighted setting, where the input graph is supplied with a weight function on vertices and we are maximizing the total weight of an independent set. 
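    Restated symbolically (nothing beyond what the abstract already says), the guarantee is a quasi-polynomial-time approximation scheme: writing α(G) for the maximum cardinality of an independent set in G, the algorithm outputs an independent set I with
    \[
    |I| \;\ge\; (1 - \varepsilon)\,\alpha(G)
    \qquad\text{in time}\qquad
    \exp\!\Big(\mathrm{poly}\big(\log |V(G)|,\ \varepsilon^{-1}\big)\Big),
    \]
    which is quasi-polynomial in |V(G)| for every fixed ε > 0 (and, in the weighted setting, the same guarantee holds with α(G) replaced by the maximum weight of an independent set).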
  2. Servedio, Rocco (Ed.)
    We study the complexity of lattice problems in a world where algorithms, reductions, and protocols can run in superpolynomial time, revisiting four foundational results: two worst-case to average-case reductions and two protocols. We also show a novel protocol. 1. We prove that secret-key cryptography exists if Õ(√n)-approximate SVP is hard for 2^{εn}-time algorithms. I.e., we extend to our setting (Micciancio and Regev's improved version of) Ajtai's celebrated polynomial-time worst-case to average-case reduction from Õ(n)-approximate SVP to SIS. 2. We prove that public-key cryptography exists if Õ(n)-approximate SVP is hard for 2^{εn}-time algorithms. This extends to our setting Regev's celebrated polynomial-time worst-case to average-case reduction from Õ(n^{1.5})-approximate SVP to LWE. In fact, Regev's reduction is quantum, but ours is classical, generalizing Peikert's polynomial-time classical reduction from Õ(n^{2})-approximate SVP. 3. We show a 2^{εn}-time coAM protocol for O(1)-approximate CVP, generalizing the celebrated polynomial-time protocol for O(√(n/log n))-CVP due to Goldreich and Goldwasser. These results show complexity-theoretic barriers to extending the recent line of fine-grained hardness results for CVP and SVP to larger approximation factors. (This result also extends to arbitrary norms.) 4. We show a 2^{εn}-time co-non-deterministic protocol for O(√(log n))-approximate SVP, generalizing the (also celebrated!) polynomial-time protocol for O(√n)-CVP due to Aharonov and Regev. 5. We give a novel coMA protocol for O(1)-approximate CVP with a 2^{εn}-time verifier. All of the results described above are special cases of more general theorems that achieve time-approximation factor tradeoffs. 
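    As one concrete reading of these statements, item 1 has the shape of the implication below; the precise security notion (e.g., against which class of adversaries the secret-key primitives are secure) and the quantification over ε are glossed here and should be taken from the paper itself.
    \[
    \Big(\exists\, \varepsilon > 0:\ \tilde{O}(\sqrt{n})\text{-approximate SVP has no } 2^{\varepsilon n}\text{-time algorithm}\Big)
    \;\Longrightarrow\;
    \text{secret-key cryptography exists},
    \]
    with items 2 through 5 having the analogous shape for their respective approximation factors, running times, and conclusions.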
  3.
    We develop exact representations of training two-layer neural networks with rectified linear units (ReLUs) in terms of a single convex program with a number of variables polynomial in the number of training samples and the number of hidden neurons. Our theory utilizes semi-infinite duality and minimum norm regularization. We show that ReLU networks trained with standard weight decay are equivalent to block ℓ1 penalized convex models. Moreover, we show that certain standard convolutional linear networks are equivalent to semidefinite programs which can be simplified to ℓ1 regularized linear models in a polynomial-sized discrete Fourier feature space. 
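    For readers who want to see what such a convex reformulation looks like, the display below sketches a group-ℓ1 (group lasso) penalized program of the kind this line of work derives for a two-layer ReLU network: X is the matrix of training samples, y the labels, β the weight-decay strength, and the diagonal 0/1 matrices D_1, ..., D_P enumerate the ReLU activation patterns realizable on the rows of X. This is a hedged sketch of the general form, not necessarily the exact program in the paper.
    \[
    \min_{\{v_i,\, w_i\}}\;
    \frac{1}{2}\Big\|\sum_{i=1}^{P} D_i X (v_i - w_i) - y\Big\|_2^2
    \;+\; \beta \sum_{i=1}^{P}\big(\|v_i\|_2 + \|w_i\|_2\big)
    \quad\text{s.t.}\quad
    (2D_i - I)X v_i \ge 0,\;\; (2D_i - I)X w_i \ge 0.
    \]
    The constraints force each block (v_i, w_i) to be consistent with the activation pattern D_i, the sum of block ℓ2 norms is the block ℓ1 penalty mentioned above, and the number of distinct patterns P is what governs the size of the convex program.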
  4. Peter Burgisser (Ed.)
    Suppose A = {a_1, . . . , a_{n+2}} ⊂ Z^n has cardinality n + 2, with all the coordinates of the a_j having absolute value at most d, and the a_j do not all lie in the same affine hyperplane. Suppose F = (f_1, . . . , f_n) is an n × n polynomial system with generic integer coefficients at most H in absolute value, and A the union of the sets of exponent vectors of the f_i. We give the first algorithm that, for any fixed n, counts exactly the number of real roots of F in time polynomial in log(dH). We also discuss a number-theoretic hypothesis that would imply a further speed-up to time polynomial in n as well. 
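    Written out in multi-index notation (x^a = x_1^{a_1} ··· x_n^{a_n}), the setup above is the following; this is only a restatement of the abstract.
    \[
    A = \{a_1, \dots, a_{n+2}\} \subset \mathbb{Z}^{n},\qquad
    f_i(x) \;=\; \sum_{j=1}^{n+2} c_{i,j}\, x^{a_j}\ \ (1 \le i \le n),\qquad
    |c_{i,j}| \le H,\ \ \max_j \|a_j\|_{\infty} \le d,
    \]
    where the c_{i,j} are generic integers (some may be zero, but every a_j appears in at least one f_i), and for each fixed n the number of real roots of F = (f_1, . . . , f_n) is counted exactly in time polynomial in log(dH).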
  5. The Exponential-Time Hypothesis (ETH) is a strengthening of the P ≠ NP conjecture, stating that 3-SAT on n variables cannot be solved in (uniform) time 2^{ε·n}, for some ε > 0. In recent years, analogous hypotheses that are “exponentially-strong” forms of other classical complexity conjectures (such as NP ⊈ BPP or coNP ⊈ NP) have also been introduced, and have become widely influential. In this work, we focus on the interaction of exponential-time hypotheses with the fundamental and closely-related questions of derandomization and circuit lower bounds. We show that even relatively-mild variants of exponential-time hypotheses have far-reaching implications to derandomization, circuit lower bounds, and the connections between the two. Specifically, we prove that: (1) The Randomized Exponential-Time Hypothesis (rETH) implies that BPP can be simulated on “average-case” in deterministic (nearly-)polynomial-time (i.e., in time 2^{Õ(log n)} = n^{(log log n)^{O(1)}}). The derandomization relies on a conditional construction of a pseudorandom generator with near-exponential stretch (i.e., with seed length Õ(log n)); this significantly improves the state-of-the-art in uniform “hardness-to-randomness” results, which previously only yielded pseudorandom generators with sub-exponential stretch from such hypotheses. (2) The Non-Deterministic Exponential-Time Hypothesis (NETH) implies that derandomization of BPP is completely equivalent to circuit lower bounds against E, and in particular that pseudorandom generators are necessary for derandomization. In fact, we show that the foregoing equivalence follows from a very weak version of NETH, and we also show that this very weak version is necessary to prove a slightly stronger conclusion that we deduce from it. Lastly, we show that disproving certain exponential-time hypotheses requires proving breakthrough circuit lower bounds. In particular, if CircuitSAT for circuits over n bits of size poly(n) can be solved by probabilistic algorithms in time 2^{n/polylog(n)}, then BPE does not have circuits of quasilinear size. 
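    For concreteness, the two hypotheses named in items (1) and (2) can be phrased as follows; these are the standard phrasings, but fine details (e.g., the error model and uniformity conventions) should be checked against the paper.
    \[
    \mathsf{rETH}:\ \exists\, \varepsilon > 0 \text{ such that no probabilistic algorithm solves } 3\text{-SAT on } n \text{ variables in time } 2^{\varepsilon n};
    \]
    \[
    \mathsf{NETH}:\ \exists\, \varepsilon > 0 \text{ such that no non-deterministic algorithm refutes unsatisfiable } 3\text{-CNFs on } n \text{ variables in time } 2^{\varepsilon n}.
    \]
    Under rETH, item (1) gives average-case derandomization of BPP in deterministic time 2^{Õ(log n)} = n^{(log log n)^{O(1)}}, i.e., nearly polynomial.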