

Title: Stabbing Planes
We develop a new semi-algebraic proof system called Stabbing Planes which formalizes modern branch-and-cut algorithms for integer programming and is in the style of modern DPLL-based SAT solvers. As with DPLL, there is only a single rule: the current polytope can be subdivided by branching on an inequality and its “integer negation.” That is, we can non-deterministically choose a hyperplane ax ≥ b with integer coefficients, which partitions the polytope into three pieces: the points in the polytope satisfying ax ≥ b, the points satisfying ax ≤ b − 1, and the middle slab b − 1 < ax < b. Since the middle slab contains no integer points it can be safely discarded, and the algorithm proceeds recursively on the other two branches. Each path terminates when the current polytope is empty, which is polynomial-time checkable. Among our results, we show that Stabbing Planes can efficiently simulate the Cutting Planes proof system, and is equivalent to a tree-like variant of the R(CP) system of Krajicek [54]. We also show that it possesses short proofs of the canonical family of systems of F_2-linear equations known as the Tseitin formulas. Finally, we prove linear lower bounds on the rank of Stabbing Planes refutations by adapting lower bounds in communication complexity, and use these bounds to show that Stabbing Planes proofs cannot be balanced.
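To make the single rule concrete, here is a minimal, illustrative sketch of the Stabbing Planes recursion (not the proof system's formal definition and not an implementation from the paper). It assumes the polytope is given as Cx ≤ d, uses scipy.optimize.linprog as the polynomial-time emptiness check, and takes a hypothetical caller-supplied heuristic choose_plane for picking the branching hyperplane ax ≥ b.

```python
# A minimal sketch of the Stabbing Planes recursion (illustrative only).
# The polytope is the set {x : C @ x <= d}.  `choose_plane` is a
# hypothetical caller-supplied heuristic returning an integer vector a
# and integer b for the branching query "a @ x >= b?".
import numpy as np
from scipy.optimize import linprog


def is_empty(C, d):
    """LP feasibility check: True iff {x : C @ x <= d} is empty."""
    res = linprog(c=np.zeros(C.shape[1]), A_ub=C, b_ub=d,
                  bounds=[(None, None)] * C.shape[1], method="highs")
    return res.status == 2  # status 2 = infeasible


def stabbing_planes(C, d, choose_plane, depth=0, max_depth=30):
    """Return True if every branch ends in an empty polytope (a refutation)."""
    if is_empty(C, d):
        return True              # leaf: polytope is empty, branch closed
    if depth >= max_depth:
        return False             # give up; a real solver keeps branching
    a, b = choose_plane(C, d)    # integer hyperplane a @ x >= b
    # Branch 1: a @ x >= b, written as -a @ x <= -b.
    upper = stabbing_planes(np.vstack([C, -a]), np.append(d, -b),
                            choose_plane, depth + 1, max_depth)
    # Branch 2: a @ x <= b - 1.  The open slab b-1 < a @ x < b contains
    # no integer points, so it is discarded.
    lower = stabbing_planes(np.vstack([C, a]), np.append(d, b - 1),
                            choose_plane, depth + 1, max_depth)
    return upper and lower
```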
Award ID(s):
2006359 1524246
NSF-PAR ID:
10343001
Author(s) / Creator(s):
Date Published:
Journal Name:
ArXiv.org
Volume:
arxiv:2017
Issue:
arxiv:1710.03219v2
ISSN:
2331-8422
Page Range / eLocation ID:
1-29
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Approximate integer programming is the following problem: for a given convex body $K \subseteq \mathbb{R}^n$, either determine whether $K \cap \mathbb{Z}^n$ is empty, or find an integer point in the convex body $2\cdot (K - c) + c$, which is $K$ scaled by 2 from its center of gravity $c$. Approximate integer programming can be solved in time $2^{O(n)}$, while the fastest known methods for exact integer programming run in time $2^{O(n)} \cdot n^n$. So far, there are no efficient methods for integer programming known that are based on approximate integer programming. Our main contributions are two such methods, each yielding novel complexity results. First, we show that an integer point $x^* \in K \cap \mathbb{Z}^n$ can be found in time $2^{O(n)}$, provided that the remainders $x_i^* \bmod \ell$ of each component of $x^*$ are given, for some arbitrarily fixed $\ell \ge 5(n+1)$. The algorithm is based on a cutting-plane technique, iteratively halving the volume of the feasible set; the cutting planes are determined via approximate integer programming. Enumeration of the possible remainders gives a $2^{O(n)} n^n$ algorithm for general integer programming. This matches the current best bound of an algorithm by Dadush (Integer programming, lattice algorithms, and deterministic volume estimation. Georgia Institute of Technology, Atlanta, 2012) that is considerably more involved. Our algorithm also relies on a new asymmetric approximate Carathéodory theorem that might be of interest on its own. Our second method concerns integer programming problems in equation-standard form $Ax = b,\; 0 \le x \le u,\; x \in \mathbb{Z}^n$. Such a problem can be reduced to the solution of $\prod_i O(\log u_i + 1)$ approximate integer programming problems. This implies, for example, that knapsack or subset-sum problems with polynomial variable range $0 \le x_i \le p(n)$ can be solved in time $(\log n)^{O(n)}$. For these problems, the best running time so far was $n^n \cdot 2^{O(n)}$.
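As a rough illustration of the remainder-enumeration step described above (and not the paper's algorithm, which handles the subproblems via approximate integer programming and a cutting-plane procedure), the sketch below fixes a residue vector r for x mod ℓ, substitutes x = r + ℓy, and hands the shrunken problem to a placeholder brute-force solver that only works on tiny toy instances. All function names are ours.

```python
# Toy sketch: fixing the residues x_i mod ell reduces integer feasibility
# over a box to a smaller problem via the substitution x = r + ell * y.
import itertools
import numpy as np


def feasible_point(A, b, lo, hi):
    """Brute-force an integer x with A @ x <= b and lo <= x <= hi."""
    ranges = [range(l, h + 1) for l, h in zip(lo, hi)]
    for x in itertools.product(*ranges):
        if np.all(A @ np.array(x) <= b):
            return np.array(x)
    return None


def solve_by_residue_enumeration(A, b, lo, hi, ell):
    """Try every residue vector r in {0,...,ell-1}^n and look for y with
    x = r + ell*y feasible; the y-problem lives in a box shrunk by ell."""
    n = A.shape[1]
    for r in itertools.product(range(ell), repeat=n):
        r = np.array(r)
        # A @ (r + ell*y) <= b  becomes  (ell*A) @ y <= b - A @ r.
        y_lo = np.ceil((lo - r) / ell).astype(int)
        y_hi = np.floor((hi - r) / ell).astype(int)
        if np.any(y_lo > y_hi):
            continue
        y = feasible_point(ell * A, b - A @ r, y_lo, y_hi)
        if y is not None:
            return r + ell * y
    return None
```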

     
  2. A major open problem in proof complexity is to prove superpolynomial lower bounds for AC0[p]-Frege proofs. This system is the analog of AC0[p], the class of bounded-depth circuits with prime modular counting gates. Despite strong lower bounds for this circuit class dating back thirty years ([28, 30]), there are no significant lower bounds for AC0[p]-Frege. Significant and extensive degree lower bounds have been obtained for a variety of subsystems of AC0[p]-Frege, including Nullstellensatz ([3]), Polynomial Calculus ([9]), and SOS ([14]). However, to date there has been no progress on AC0[p]-Frege lower bounds. In this paper we study constant-depth extensions of the Polynomial Calculus [13]. We show that these extensions are much more powerful than was previously known. Our main result is that small depth (≤ 43) Polynomial Calculus (over a sufficiently large field) can polynomially effectively simulate all of the well-studied semialgebraic proof systems: Cutting Planes, Sherali-Adams, Sum-of-Squares (SOS), and Positivstellensatz Calculus (Dynamic SOS). They can also quasi-polynomially effectively simulate AC0[q]-Frege for any prime q independent of the characteristic of the underlying field, and they can effectively simulate TC0-Frege if the depth is allowed to grow proportionally. Thus, proving strong lower bounds for constant-depth extensions of Polynomial Calculus would not only give lower bounds for AC0[p]-Frege, but also for systems as strong as TC0-Frege.
  3. Abstract

    We study the sparsity of the solutions to systems of linear Diophantine equations with and without non-negativity constraints. The sparsity of a solution vector is the number of its nonzero entries, which is referred to as the $\ell_0$-norm of the vector. Our main results are new improved bounds on the minimal $\ell_0$-norm of solutions to systems $A\mathbf{x} = \mathbf{b}$, where $A \in \mathbb{Z}^{m \times n}$, $\mathbf{b} \in \mathbb{Z}^m$ and $\mathbf{x}$ is either a general integer vector (lattice case) or a non-negative integer vector (semigroup case). In certain cases, we give polynomial time algorithms for computing solutions with $\ell_0$-norm satisfying the obtained bounds. We show that our bounds are tight. Our bounds can be seen as functions naturally generalizing the rank of a matrix over $\mathbb{R}$ to other subdomains such as $\mathbb{Z}$. We show that these new rank-like functions are all NP-hard to compute in general, but polynomial-time computable for a fixed number of variables.
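To make the $\ell_0$ quantity concrete, here is a toy brute-force routine (ours, not from the paper) that finds the sparsest integer solution of Ax = b among solutions whose nonzero entries are bounded in absolute value by a hypothetical parameter `bound`; the paper's bounds and algorithms are of course far more general.

```python
# Toy illustration of the minimal ell_0-norm: the fewest nonzero entries
# over integer solutions of A @ x = b, searched by increasing support size.
import itertools
import numpy as np


def min_l0_integer_solution(A, b, bound=5):
    """Return (k, x) with A @ x = b, x integer, and k = ||x||_0 minimal
    among solutions whose nonzero entries lie in [-bound, bound]."""
    m, n = A.shape
    nonzero = [v for v in range(-bound, bound + 1) if v != 0]
    for k in range(n + 1):                         # candidate sparsity level
        for support in itertools.combinations(range(n), k):
            cols = A[:, list(support)]
            for vals in itertools.product(nonzero, repeat=k):
                v = np.array(vals, dtype=int)
                lhs = cols @ v if k > 0 else np.zeros(m, dtype=int)
                if np.array_equal(lhs, b):
                    x = np.zeros(n, dtype=int)
                    x[list(support)] = v
                    return k, x
    return None
```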

     
  4. We explore algorithms and limitations for sparse optimization problems such as sparse linear regression and robust linear regression. The goal of the sparse linear regression problem is to identify a small number of key features, while the goal of the robust linear regression problem is to identify a small number of erroneous measurements. Specifically, the sparse linear regression problem seeks a k-sparse vector x ∈ Rd to minimize ‖Ax − b‖2, given an input matrix A ∈ Rn×d and a target vector b ∈ Rn, while the robust linear regression problem seeks a set S that ignores at most k rows and a vector x to minimize ‖(Ax − b)S‖2. We first show bicriteria NP-hardness of approximation for robust regression, building on the work of [OWZ15], which implies a similar result for sparse regression. We further show fine-grained hardness of robust regression through a reduction from the minimum-weight k-clique conjecture. On the positive side, we give an algorithm for robust regression that achieves arbitrarily accurate additive error and runs in time that closely matches the lower bound from the fine-grained hardness result, as well as an algorithm for sparse regression with similar runtime. Both our upper and lower bounds rely on a general reduction from robust linear regression to sparse regression that we introduce. Our algorithms, inspired by the 3SUM problem, use approximate nearest neighbor data structures and may be of independent interest for solving sparse optimization problems. For instance, we demonstrate that our techniques can also be used for the well-studied sparse PCA problem.
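For concreteness, the following exhaustive-search baselines (ours, not the algorithms of the paper, which rely on approximate nearest neighbor data structures) spell out the two objectives: sparse regression enumerates supports of size k, and robust regression enumerates which k rows to drop. Both are only practical for tiny n, d, k.

```python
# Brute-force baselines for the two objectives defined above.
import itertools
import numpy as np


def sparse_regression_bruteforce(A, b, k):
    """min ||A x - b||_2 over k-sparse x, by trying every support of size k."""
    n, d = A.shape
    best_err, best_x = np.inf, None
    for support in itertools.combinations(range(d), k):
        cols = A[:, list(support)]
        coef, *_ = np.linalg.lstsq(cols, b, rcond=None)
        err = np.linalg.norm(cols @ coef - b)
        if err < best_err:
            best_err, best_x = err, np.zeros(d)
            best_x[list(support)] = coef
    return best_x, best_err


def robust_regression_bruteforce(A, b, k):
    """min ||(A x - b)_S||_2 by trying every set of exactly k dropped rows
    (dropping exactly k is never worse than dropping fewer)."""
    n, d = A.shape
    best_err, best_x = np.inf, None
    for dropped in itertools.combinations(range(n), k):
        keep = [i for i in range(n) if i not in dropped]
        x, *_ = np.linalg.lstsq(A[keep], b[keep], rcond=None)
        err = np.linalg.norm(A[keep] @ x - b[keep])
        if err < best_err:
            best_err, best_x = err, x
    return best_x, best_err
```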
  5. Query-to-communication lifting theorems, which connect the query complexity of a Boolean function to the communication complexity of an associated “lifted” function obtained by composing the function with many copies of another function known as a gadget, have been instrumental in resolving many open questions in computational complexity. A number of important complexity questions could be resolved if we could make substantial improvements in the input size required for lifting with the Index function, which is a universal gadget for lifting, from its current near-linear size down to polylogarithmic in the number of inputs N of the original function or, ideally, constant. The near-linear size bound was recently shown by Lovett, Meka, Mertz, Pitassi and Zhang [20] using a recent breakthrough improvement on the Sunflower Lemma to show that a certain graph associated with an Index function of that size is a disperser. They also stated a conjecture about the Index function that is essential for further improvements in the size required for lifting with Index using current techniques. In this paper we prove the following:
     - The conjecture of Lovett et al. is false when the size of the Index gadget is less than logarithmic in N.
     - The same limitation applies to the Inner-Product function. More precisely, the Inner-Product function, which is known to satisfy the disperser property at size O(log N), also does not have this property when its size is less than log N.
     - Notwithstanding the above, we prove a lifting theorem that applies to Index gadgets of any size at least 4 and yields lower bounds for a restricted class of communication protocols in which one of the players is limited to sending parities of its inputs.
     - Using a modification of the same idea with improved lifting parameters, we derive a strong lifting theorem from decision tree size to parity decision tree size. We use this, in turn, to derive a general lifting theorem in proof complexity from tree-resolution size to tree-like Res(⊕) refutation size, which yields many new exponential lower bounds on such proofs.
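To unpack what composing a function with copies of a gadget means here, the sketch below (our illustration; the names are not the paper's notation) lifts an arbitrary Boolean function f with the Index gadget IND_m(x, y) = y[x]: Alice holds one pointer per original input bit, Bob holds one m-bit block per input bit, and the lifted function applies f to the selected bits.

```python
# Illustrative sketch of lifting with the Index gadget: IND_m takes
# Alice's pointer x in {0,...,m-1} and Bob's string y in {0,1}^m and
# outputs y[x]; the lifted function applies f to one gadget output per
# original input bit.
from typing import Callable, Sequence


def index_gadget(x: int, y: Sequence[int]) -> int:
    """IND_m(x, y) = y[x]."""
    return y[x]


def lift_with_index(f: Callable[[Sequence[int]], int], m: int):
    """Return the lifted function F(xs, ys) = f(y_1[x_1], ..., y_N[x_N]),
    where Alice holds the pointers xs and Bob holds the m-bit blocks ys."""
    def lifted(xs: Sequence[int], ys: Sequence[Sequence[int]]) -> int:
        assert all(0 <= x < m for x in xs) and all(len(y) == m for y in ys)
        return f([index_gadget(x, y) for x, y in zip(xs, ys)])
    return lifted


# Example: lift a 3-bit XOR with an Index gadget of size m = 4.
if __name__ == "__main__":
    xor3 = lambda bits: bits[0] ^ bits[1] ^ bits[2]
    F = lift_with_index(xor3, m=4)
    print(F([1, 3, 0], [[0, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 1]]))  # 1 ^ 1 ^ 0 = 0
```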