

Title: Diameters of Cocircuit Graphs of Oriented Matroids: An Update
Oriented matroids are combinatorial structures that generalize point configurations, vector configurations, hyperplane arrangements, polyhedra, linear programs, and directed graphs. They have played a key role in combinatorics, computational geometry, and optimization. This paper surveys prior work and presents an update on the search for bounds on the diameter of the cocircuit graph of an oriented matroid. The motivation for our investigations is the complexity of the simplex method and the criss-cross method. We review the diameter problem and show that the diameter bounds for general oriented matroids reduce to those for uniform oriented matroids. We give the latest exact bounds for oriented matroids of low rank and low corank, and for all oriented matroids with up to nine elements (this part required a large computer-based proof). For arbitrary oriented matroids, we present an improvement to a quadratic bound of Finschi. Our discussion highlights an old conjecture that states a linear bound for the diameter is possible. On the positive side, we show the conjecture is true for oriented matroids of low rank and low corank and, verified with computers, for all oriented matroids with up to nine elements. On the negative side, our computer search showed that two natural strengthenings of the main conjecture are false.
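To make the central objects concrete, here is a minimal Python sketch, not taken from the paper, that lists the cocircuits of a tiny realizable rank-2 example and computes the diameter of its cocircuit graph by breadth-first search. The cocircuits are the sign vectors induced by the three plane vectors (1,0), (0,1), (1,1), and the adjacency test is a standard one for uniform oriented matroids (two cocircuits are neighbors when they never carry opposite signs on a common element and their zero sets share exactly r - 2 elements); all names are illustrative.

```python
from collections import deque

# Cocircuits of the rank-2 uniform oriented matroid realized by the three
# plane vectors (1,0), (0,1), (1,1): for each direction a orthogonal to one
# of the vectors, record the sign vector (sign(a.v1), sign(a.v2), sign(a.v3)).
COCIRCUITS = [
    (+1, 0, +1), (0, +1, +1), (-1, +1, 0),
    (-1, 0, -1), (0, -1, -1), (+1, -1, 0),
]
RANK = 2

def adjacent(Y, Z, r):
    """Adjacency test for cocircuits of a *uniform* oriented matroid of rank r:
    Y and Z are neighbors when they never carry opposite signs on a common
    element and their zero sets share exactly r - 2 elements."""
    if Y == Z:
        return False
    if any(y * z < 0 for y, z in zip(Y, Z)):
        return False
    common_zeros = sum(1 for y, z in zip(Y, Z) if y == 0 and z == 0)
    return common_zeros == r - 2

def diameter(vertices, adj):
    """Diameter of the graph on `vertices` under the adjacency predicate `adj`,
    computed by breadth-first search from every vertex."""
    best = 0
    for source in vertices:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in vertices:
                if v not in dist and adj(u, v):
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

if __name__ == "__main__":
    print(diameter(COCIRCUITS, lambda Y, Z: adjacent(Y, Z, RANK)))  # prints 3
```

For this arrangement the cocircuit graph is a hexagon, so the script prints 3; swapping in a different cocircuit list (and rank) reuses the same breadth-first search.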
Award ID(s):
1934568
NSF-PAR ID:
10348839
Author(s) / Creator(s):
Date Published:
Journal Name:
The Electronic Journal of Combinatorics
Volume:
28
Issue:
4
ISSN:
1077-8926
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We study the log-rank conjecture from the perspective of point-hyperplane incidence geometry. We formulate the following conjecture: Given a point set in ℝ^d that is covered by constant-sized sets of parallel hyperplanes, there exists an affine subspace that accounts for a large (i.e., 2^(-polylog(d))) fraction of the incidences, in the sense of containing a large fraction of the points and being contained in a large fraction of the hyperplanes. In other words, the point-hyperplane incidence graph for such configurations has a large complete bipartite subgraph. Alternatively, our conjecture may be interpreted linear-algebraically as follows: Any rank-d matrix containing at most O(1) distinct entries in each column contains a submatrix of fractional size 2^(-polylog(d)) in which each column is constant. We prove that our conjecture is equivalent to the log-rank conjecture; the crucial ingredient of this proof is a reduction from bounds for parallel k-partitions to bounds for parallel (k-1)-partitions. We also introduce an (apparent) strengthening of the conjecture, which relaxes the requirement that the sets of hyperplanes be parallel. Motivated by the connections above, we revisit well-studied questions in point-hyperplane incidence geometry without structural assumptions (i.e., the existence of partitions). We give an elementary argument for the existence of complete bipartite subgraphs of density Ω(ε^(2d)/d) in any d-dimensional configuration with incidence density ε, qualitatively matching previous results proved using sophisticated geometric techniques. We also improve an upper-bound construction of Apfelbaum and Sharir [2], yielding a configuration whose complete bipartite subgraphs are exponentially small and whose incidence density is Ω(1/√d). Finally, we discuss various constructions (due to others) of products of Boolean matrices which yield configurations with incidence density Ω(1) and complete bipartite subgraph density 2^(-Ω(√d)), and pose several questions for this special case in the alternative language of extremal set combinatorics. Our framework and results may help shed light on the difficulty of improving Lovett’s Õ(√rank(f)) bound [20] for the log-rank conjecture. In particular, any improvement on this bound would imply the first complete bipartite subgraph size bounds for parallel 3-partitioned configurations which beat our generic bounds for unstructured configurations.
  2. We show that there is an equation of degree at most poly(n) for the (Zariski closure of the) set of the non-rigid matrices: That is, we show that for every large enough field 𝔽, there is a non-zero n^2-variate polynomial P ∈ 𝔽[x_{1,1}, ..., x_{n,n}] of degree at most poly(n) such that every matrix M that can be written as a sum of a matrix of rank at most n/100 and a matrix of sparsity at most n^2/100 satisfies P(M) = 0. This confirms a conjecture of Gesmundo, Hauenstein, Ikenmeyer, and Landsberg [9] and improves the best upper bound known for this problem down from exp(n^2) [9, 12] to poly(n). We also show a similar polynomial degree bound for the (Zariski closure of the) set of all matrices M such that the linear transformation represented by M can be computed by an algebraic circuit with at most n^2/200 edges (without any restriction on the depth). As far as we are aware, no such bound was known prior to this work when the depth of the circuits is unbounded. Our methods are elementary and short and rely on a polynomial map of Shpilka and Volkovich [21] to construct low-degree “universal” maps for non-rigid matrices and small linear circuits. Combining this construction with a simple dimension counting argument to show that any such polynomial map has a low-degree annihilating polynomial completes the proof. As a corollary, we show that any derandomization of the polynomial identity testing problem will imply new circuit lower bounds. A similar (but incomparable) theorem was proved by Kabanets and Impagliazzo [11].
  3. The Sparsest Cut is a fundamental optimization problem that has been extensively studied. For planar inputs the problem is in P and can be solved in Õ(n^3) time if all vertex weights are 1. Despite a significant amount of effort, the best algorithms date back to the early 1990s and can only achieve O(log n)-approximation in Õ(n) time or 3.5-approximation in Õ(n^2) time [Rao, STOC92]. Our main result is an Ω(n^(2-ε)) lower bound for Sparsest Cut even in planar graphs with unit vertex weights, under the (min, +)-Convolution conjecture, showing that approximations are inevitable in the near-linear time regime. To complement the lower bound, we provide a 3.3-approximation in near-linear time, improving upon the 25-year-old result of Rao in both time and accuracy. We also show that our lower bound is not far from optimal by observing an exact algorithm with running time Õ(n^(5/2)), improving upon the Õ(n^3) algorithm of Park and Phillips [STOC93]. Our lower bound accomplishes a repeatedly raised challenge by being the first fine-grained lower bound for a natural planar graph problem in P. Building on our construction we prove near-quadratic lower bounds under SETH for variants of the closest pair problem in planar graphs, and use them to show that the popular Average-Linkage procedure for Hierarchical Clustering cannot be simulated in truly subquadratic time. At the core of our constructions is a diamond-like gadget that also settles the complexity of Diameter in distributed planar networks. We prove an Ω(n/log n) lower bound on the number of communication rounds required to compute the weighted diameter of a network in the CONGEST model, even when the underlying graph is planar and all nodes are D = 4 hops away from each other. This is the first poly(n) lower bound in the planar-distributed setting, and it complements the recent poly(D, log n) upper bounds of Li and Parter [STOC 2019] for (exact) unweighted diameter and for (1 + ε)-approximate weighted diameter. (A brute-force sketch of the sparsest-cut objective with unit vertex weights appears below.)
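The bounds in the preceding item all refer to the sparsest-cut objective, so a minimal sketch may help fix it. The Python snippet below evaluates one common formulation, the uniform sparsest-cut value with unit vertex weights and unit edge capacities, by exhaustive search on a tiny planar grid; the graph, the names, and the brute-force search are purely illustrative assumptions and bear no relation to the near-linear-time algorithms cited above.

```python
from itertools import combinations

# A small planar example: the 3x3 grid graph with unit edge weights.
# Vertices are (row, col) pairs; edges join horizontally or vertically
# adjacent cells.
VERTICES = [(r, c) for r in range(3) for c in range(3)]
EDGES = ([((r, c), (r, c + 1)) for r in range(3) for c in range(2)]
         + [((r, c), (r + 1, c)) for r in range(2) for c in range(3)])

def sparsity(S, vertices, edges):
    """Uniform sparsest-cut objective with unit vertex weights:
    (number of edges crossing the cut S) / (|S| * |V \\ S|)."""
    S = set(S)
    crossing = sum(1 for u, v in edges if (u in S) != (v in S))
    return crossing / (len(S) * (len(vertices) - len(S)))

def brute_force_sparsest_cut(vertices, edges):
    """Exhaustive search over all proper nonempty subsets S; exponential in
    |V|, so only meant to pin down the objective on tiny instances."""
    best_value, best_cut = float("inf"), None
    for k in range(1, len(vertices)):
        for S in combinations(vertices, k):
            value = sparsity(S, vertices, edges)
            if value < best_value:
                best_value, best_cut = value, set(S)
    return best_value, best_cut

if __name__ == "__main__":
    value, cut = brute_force_sparsest_cut(VERTICES, EDGES)
    print(value, sorted(cut))
```

On this 3x3 grid the optimum is 1/6, attained by slicing off one full row or column (3 crossing edges divided by 3 times 6 vertices on the two sides).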
  4. Ta-Shma, Amnon (Ed.)
    The Tensor Isomorphism problem (TI) has recently emerged as having connections to multiple areas of research within complexity and beyond, but the current best upper bound is essentially the brute-force algorithm. Being an algebraic problem, TI (or rather, proving that two tensors are non-isomorphic) lends itself very naturally to algebraic and semi-algebraic proof systems, such as the Polynomial Calculus (PC) and Sum of Squares (SoS). For its combinatorial cousin Graph Isomorphism, essentially optimal lower bounds are known for approaches based on PC and SoS (Berkholz & Grohe, SODA '17). Our main results are an Ω(n) lower bound on PC degree or SoS degree for Tensor Isomorphism, and a nontrivial upper bound for testing isomorphism of tensors of bounded rank. We also show that PC cannot perform basic linear algebra in sub-linear degree, such as comparing the rank of two matrices (which is essentially the same as 2-TI) or deriving BA=I from AB=I. As linear algebra is a key tool for understanding tensors, we introduce a strictly stronger proof system, PC-Inv, which allows as derivation rules all substitution instances of the implication AB=I → BA=I. We conjecture that even PC-Inv cannot solve TI in polynomial time, but leave open proving lower bounds for PC-Inv on any system of equations, let alone those arising from TI. We also highlight many other open questions about proof complexity approaches to TI.
  5. We initiate the study of biologically inspired spiking neural networks from the perspective of streaming algorithms. Like computers, human brains face memory limitations, which pose a significant obstacle when processing large-scale and dynamically changing data. In computer science, these challenges are captured by the well-known streaming model, which can be traced back to Munro and Paterson '78 and has had significant impact in theory and beyond. In the classical streaming setting, one must compute a function f of a stream of updates 𝒮 = {u₁, …, u_m}, given restricted single-pass access to the stream. The primary complexity measure is the space used by the algorithm. In contrast to the large body of work on streaming algorithms, relatively little is known about the computational aspects of data processing in spiking neural networks. In this work, we seek to connect these two models, leveraging techniques developed for streaming algorithms to better understand neural computation. Our primary goal is to design networks for various computational tasks using as few auxiliary (non-input or output) neurons as possible. The number of auxiliary neurons can be thought of as the "space" required by the network. Previous algorithmic work in spiking neural networks has many similarities with streaming algorithms. However, the connection between these two space-limited models has not been formally addressed. We take the first steps towards understanding this connection. On the upper bound side, we design neural algorithms based on known streaming algorithms for fundamental tasks, including distinct elements, approximate median, and heavy hitters. The number of neurons in our solutions almost matches the space bounds of the corresponding streaming algorithms. As a general algorithmic primitive, we show how to implement the important streaming technique of linear sketching efficiently in spiking neural networks (see the sketch after this list). On the lower bound side, we give a generic reduction, showing that any space-efficient spiking neural network can be simulated by a space-efficient streaming algorithm. This reduction lets us translate streaming-space lower bounds into nearly matching neural-space lower bounds, establishing a close connection between the two models.
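Since the last item above highlights linear sketching as a core streaming primitive, here is a minimal, self-contained Python illustration of that idea, an AMS-style estimator for the second frequency moment. It is a generic textbook construction offered only as an illustration; the class name, the parameters, and the use of explicit stored sign vectors (instead of 4-wise independent hashing) are simplifying assumptions and are not taken from the paper.

```python
import random
import statistics

class AMSSketch:
    """A toy linear sketch (Alon-Matias-Szegedy style) estimating the second
    frequency moment F2 = sum_j x_j^2 of a stream of additive updates.
    Each counter stores the dot product of the frequency vector with a random
    +/-1 sign vector; for clarity the signs are stored explicitly rather than
    generated by the 4-wise independent hash functions a real implementation
    would use."""

    def __init__(self, universe_size, rows=9, cols=10, seed=0):
        rng = random.Random(seed)
        self.rows, self.cols = rows, cols
        # signs[r][c][j] is the +/-1 sign applied to item j in counter (r, c).
        self.signs = [[[rng.choice((-1, 1)) for _ in range(universe_size)]
                       for _ in range(cols)] for _ in range(rows)]
        self.counters = [[0] * cols for _ in range(rows)]

    def update(self, item, delta=1):
        """Process one streaming update: the frequency of `item` changes by
        `delta`.  Linearity: the sketch of x + y is the entrywise sum of the
        sketches of x and y built with the same signs."""
        for r in range(self.rows):
            for c in range(self.cols):
                self.counters[r][c] += self.signs[r][c][item] * delta

    def estimate_f2(self):
        """Median over rows of the mean over columns of the squared counters."""
        return statistics.median(
            sum(z * z for z in row) / self.cols for row in self.counters
        )

if __name__ == "__main__":
    stream = [0, 1, 1, 2, 2, 2, 3, 3, 3, 3]   # item frequencies 1, 2, 3, 4
    sketch = AMSSketch(universe_size=5)
    for item in stream:
        sketch.update(item)
    print(sketch.estimate_f2())               # an estimate of F2 = 30
```

Because each counter is a linear function of the frequency vector, updates can arrive one at a time and in any order, and two sketches built with the same signs can be added coordinatewise to sketch the combined stream, which is the defining property of linear sketching.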