In the k-cut problem, we want to find the lowest-weight set of edges whose deletion breaks a given (multi)graph into k connected components. Algorithms of Karger and Stein can solve this in roughly O(n^{2k}) time. However, lower bounds from conjectures about the k-clique problem imply that Ω(n^{(1-o(1))k}) time is likely needed. Recent results of Gupta, Lee, and Li have given new algorithms for general k-cut in n^{1.98k + O(1)} time, as well as specialized algorithms with better performance for certain classes of graphs (e.g., for small integer edge weights). In this work, we resolve the problem for general graphs. We show that the Contraction Algorithm of Karger outputs any fixed k-cut of weight αλ_k with probability Ω_k(n^{-αk}), where λ_k denotes the minimum k-cut weight. This also gives an extremal bound of O_k(n^k) on the number of minimum k-cuts and an algorithm to compute λ_k with roughly n^k polylog(n) runtime. Both are tight up to lower-order factors, with the algorithmic lower bound assuming hardness of max-weight k-clique. The first main ingredient in our result is an extremal bound on the number of cuts of weight less than 2λ_k/k, using the Sunflower lemma. The second ingredient is a fine-grained analysis of how the graph shrinks, and how the average degree evolves, in the Karger process.
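As a hedged sketch of the process analyzed above: the Contraction Algorithm picks a uniformly random edge and merges its endpoints until k supernodes remain, and repeating the run while keeping the smallest cut found boosts the success probability. The implementation below is our own minimal illustration (unweighted multigraph given as an edge list, union-find for supernodes), not the paper's optimized algorithm:

```python
import random

def contract_to_k(edges, n, k, seed=None):
    """One run of the contraction process on a multigraph with vertices
    0..n-1: contract uniformly random edges until k supernodes remain;
    the surviving inter-supernode edges form a (random) k-cut.
    Assumes the graph is connected enough to reach k supernodes."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):                        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    alive = list(edges)                 # edges that are not yet self-loops
    components = n
    while components > k and alive:
        u, v = rng.choice(alive)        # pick a uniformly random live edge
        parent[find(u)] = find(v)       # contract it
        components -= 1
        alive = [(a, b) for a, b in alive if find(a) != find(b)]
    return [(a, b) for a, b in edges if find(a) != find(b)]

def min_k_cut_estimate(edges, n, k, trials=200):
    """Repeat the contraction process and keep the smallest cut seen."""
    return min((contract_to_k(edges, n, k, seed=t) for t in range(trials)),
               key=len)
```

For example, on two triangles joined by a bridge edge (2, 3), repeated trials quickly find the minimum 2-cut consisting of just the bridge; the paper's bound says roughly n^{αk} repetitions suffice to find cuts of weight αλ_k.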
Root system chip-firing
Propp recently introduced a variant of chip-firing on the infinite path where the chips are given distinct integer labels and conjectured that this process is confluent from certain (but not all) initial configurations of chips. Hopkins, McConville, and Propp proved Propp's confluence conjecture. We recast this result in terms of root systems: the labeled chip-firing game can be seen as a process which allows replacing an integer vector λ by λ+α whenever λ is orthogonal to α, for α a positive root of a root system of Type A. We give conjectures about confluence for this process in the general setting of an arbitrary root system. We show that the process is always confluent from any initial point after modding out by the action of the Weyl group (an analog of unlabeled chip-firing in arbitrary type). We also study some remarkable deformations of this process which are confluent from any initial point. For these deformations, the set of weights with given stabilization has an interesting geometric structure related to permutohedra. This geometric structure leads us to define certain 'Ehrhart-like' polynomials that conjecturally have nonnegative integer coefficients.
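Concretely, the positive roots of the Type A root system A_{n-1} are e_i - e_j for i < j, and an integer vector λ is orthogonal to e_i - e_j exactly when λ_i = λ_j. The short exhaustive search below (our illustration, not code from the paper) exhibits both phenomena: confluence can fail from some initial configurations, such as three chips at the origin, while an even number of chips at the origin stabilizes uniquely (per Hopkins, McConville, and Propp):

```python
from itertools import combinations

def terminal_states(lam):
    """All terminal configurations reachable from lam under the Type A
    firing move lam -> lam + (e_i - e_j), i < j, which is allowed
    exactly when lam_i == lam_j (lam orthogonal to the positive root)."""
    lam = tuple(lam)
    moves = [(i, j) for i, j in combinations(range(len(lam)), 2)
             if lam[i] == lam[j]]
    if not moves:                   # no positive root orthogonal to lam
        return {lam}
    result = set()
    for i, j in moves:
        nxt = list(lam)
        nxt[i] += 1                 # fire: add e_i - e_j
        nxt[j] -= 1
        result |= terminal_states(nxt)
    return result

# Two labeled chips at the origin stabilize uniquely:
assert terminal_states((0, 0)) == {(1, -1)}
# Three chips at the origin reach several terminal states,
# so the process is not confluent from (0, 0, 0).
```

The same search applied to four chips at the origin returns a single terminal state, consistent with the confluence theorem for an even number of chips at the origin.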
- Award ID(s): 1764370
- PAR ID: 10111164
- Date Published:
- Journal Name: Séminaire lotharingien de combinatoire
- Volume: 80B
- ISSN: 1286-4889
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
This paper is inspired by Richards' work on large gaps between sums of two squares [10]. It is shown in [10] that there exist arbitrarily large values of λ and μ, with certain properties, such that the corresponding intervals do not contain any sums of two squares. Geometrically, these gaps between sums of two squares correspond to annuli that do not contain any integer lattice points. A major objective of this paper is to investigate the sparse distribution of integer lattice points within annular regions. Examples and consequences are given.
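For concreteness (our sketch, not the paper's), an annulus with inner radius r1 and outer radius r2 centered at the origin contains no integer lattice points precisely when no sum of two squares lies in (r1^2, r2^2]; a direct enumeration:

```python
import math

def lattice_points_in_annulus(r1, r2):
    """Integer points (x, y) with r1^2 < x^2 + y^2 <= r2^2."""
    R = math.floor(r2)
    return [(x, y)
            for x in range(-R, R + 1)
            for y in range(-R, R + 1)
            if r1 * r1 < x * x + y * y <= r2 * r2]
```

For example, the annulus between radii sqrt(2) and sqrt(3) is empty, because 3 is not a sum of two squares.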
-
Strongly consistent distributed systems are easy to reason about but face fundamental limitations in availability and performance. Weakly consistent systems can be implemented with very high performance but place a burden on the application developer to reason about complex interleavings of execution. Invariant confluence provides a formal framework for understanding when we can get the best of both worlds. An invariant confluent object can be efficiently replicated with no coordination needed to preserve its invariants. However, actually determining whether or not an object is invariant confluent is challenging. In this paper, we establish conditions under which a commonly used sufficient condition for invariant confluence is both necessary and sufficient, and we use this condition to design a general-purpose interactive invariant confluence decision procedure. We then take a step beyond invariant confluence and introduce a generalization of invariant confluence, called segmented invariant confluence, that allows us to replicate non-invariant confluent objects with a small amount of coordination. We implement these formalisms in a prototype called Lucy and find that our decision procedures efficiently handle common real-world workloads including foreign keys, escrow transactions, and more.
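A toy sketch of the idea (our own illustration using a simple replicated counter, not the paper's Lucy prototype): replicas merge by summing their locally applied deltas, and an invariant such as a non-negative balance survives merging for an increments-only object but not for unrestricted withdrawals:

```python
def merge(initial, deltas):
    """State of a replicated counter after merging all replicas' deltas."""
    return initial + sum(deltas)

def invariant_ok(balance):
    """The invariant we want every merged state to preserve."""
    return balance >= 0

# Two replicas start from balance 100; each local state is valid.
# Increments preserve the invariant under any merge...
inc_merged = merge(100, [+30, +20])     # invariant holds after merging
# ...but two locally valid withdrawals of 60 (each leaving 40 >= 0)
# can violate it once merged, so this object is not invariant
# confluent and needs coordination (or the paper's segmentation).
wd_merged = merge(100, [-60, -60])      # invariant violated after merging
```

This is exactly the escrow-transaction scenario mentioned above: coordination can be limited to the withdrawals while increments remain coordination-free.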
-
Given a function f from the set [N] to a d-dimensional integer grid, we consider data structures that allow efficient orthogonal range searching queries in the image of f, without explicitly storing it. We show that, if f is of the form [N] → [2^w]^d for some w = polylog(N) and is computable in constant time, then, for any 0 < α < 1, we can obtain a data structure using Õ(N^{1-α/3}) words of space such that, for a given d-dimensional axis-aligned box B, we can search for some x ∈ [N] such that f(x) ∈ B in time Õ(N^α). This result is obtained simply by combining integer range searching with the Fiat-Naor function inversion scheme, which was already used in data-structure problems previously. We further obtain
- data structures for range counting and reporting, predecessor, selection, ranking queries, and combinations thereof, on the set f([N]),
- data structures for preimage size and preimage selection queries for a given value of f, and
- data structures for selection and ranking queries on geometric quantities computed from tuples of points in d-space.
These results unify and generalize previously known results on 3SUM-indexing and string searching, and are widely applicable as a black box to a variety of problems. In particular, we give a data structure for a generalized version of gapped string indexing, and show how to preprocess a set of points on an integer grid in order to efficiently compute (in sublinear time), for points contained in a given axis-aligned box, their Theil-Sen estimator, the kth largest area triangle, or the induced hyperplane that is the kth furthest from the origin.
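The paper's actual data structure combines integer range searching with Fiat-Naor inversion; as a hedged point of comparison, the sketch below (names and example function are ours) is only the trivial Θ(N)-word baseline for d = 1 that the Õ(N^{1-α/3}) space bound improves on: store all pairs (f(x), x) sorted by value and answer a box query with binary search.

```python
import bisect

def build_index(f, N):
    """Precompute all (f(x), x) pairs sorted by value: Theta(N) words."""
    return sorted((f(x), x) for x in range(N))

def find_preimage_in_range(index, lo, hi):
    """Return some x with lo <= f(x) <= hi, or None if no such x exists."""
    i = bisect.bisect_left(index, (lo, -1))
    if i < len(index) and index[i][0] <= hi:
        return index[i][1]
    return None
```

The point of the paper is that when f itself is cheap to evaluate, most of this table can be discarded and reconstructed on demand via function inversion, trading space for query time.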
-
This article studies two kinds of information extracted from statistical correlations between methods for assigning net atomic charges (NACs) in molecules. First, relative charge transfer magnitudes are quantified by performing instant least squares fitting (ILSF) on the NACs reported by Cho et al. (ChemPhysChem, 2020, 21, 688–696) across 26 methods applied to ∼2000 molecules. The Hirshfeld and Voronoi deformation density (VDD) methods had the smallest charge transfer magnitudes, while the quantum theory of atoms in molecules (QTAIM) method had the largest charge transfer magnitude. Methods optimized to reproduce the molecular dipole moment (e.g., ACP, ADCH, CM5) have smaller charge transfer magnitudes than methods optimized to reproduce the molecular electrostatic potential (e.g., CHELPG, HLY, MK, RESP). Several methods had charge transfer magnitudes even larger than the electrostatic potential fitting group. Second, confluence between different charge assignment methods is quantified to identify which charge assignment method produces the best NAC values for predicting, via linear correlations, the results of 20 charge assignment methods having a complete basis set limit across the dataset of ∼2000 molecules. The DDEC6 NACs were the best such predictor of the entire dataset. Seven confluence principles are introduced explaining why confluent quantitative descriptors offer predictive advantages for modeling a broad range of physical properties and target applications. These confluence principles can be applied in various fields of scientific inquiry. A theory is derived showing confluence is better revealed by standardized statistical analysis (e.g., principal components analysis of the correlation matrix and standardized reversible linear regression) than by unstandardized statistical analysis.
These confluence principles were used together with other key principles and the scientific method to make assigning atom-in-material properties non-arbitrary. The N@C60 system provides an unambiguous and non-arbitrary falsifiable test of atomic population analysis methods. The HLY, ISA, MK, and RESP methods failed for this material.
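The claim about standardized analysis can be made concrete in a small sketch (our example, not the article's dataset): for two descriptors, PCA of the correlation matrix [[1, r], [r, 1]] has eigenvalues 1 + r and 1 - r, which are invariant to each descriptor's units, whereas covariance-matrix PCA is dominated by whichever descriptor has the larger numerical scale.

```python
def pearson_r(xs, ys):
    """Pearson correlation of two equal-length samples (population stds)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return sum((x - mx) * (y - my)
               for x, y in zip(xs, ys)) / (n * sx * sy)

def correlation_pca_2d(xs, ys):
    """Eigenvalues of the 2x2 correlation matrix [[1, r], [r, 1]],
    namely 1 + r and 1 - r; rescaling either variable leaves r, and
    hence the standardized principal components, unchanged."""
    r = pearson_r(xs, ys)
    return (1 + r, 1 - r)
```

Rescaling one descriptor by a factor of 1000 leaves these standardized eigenvalues unchanged, which is the sense in which standardized analysis better reveals confluence among descriptors.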