

Title: Legal Assignments and Fast EADAM with Consent via Classic Theory of Stable Matchings
Gale and Shapley’s stable assignment problem has been extensively studied, applied, and extended. In the context of school choice, mechanisms often aim at finding an assignment that is more favorable to students. We investigate two extensions introduced in this framework, legal assignments and the efficiency-adjusted deferred acceptance mechanism (EADAM), through the lens of the classic theory of stable matchings. In any instance, the set of legal assignments is known to contain all stable assignments. We prove that the set of legal assignments is exactly the set of stable assignments in another instance. Moreover, we show that essentially all optimization problems over legal assignments can be solved within the same time bound needed for solving them over the set of stable assignments. A key tool for this latter result is an algorithm that finds the student-optimal legal assignment. We then generalize our algorithm to obtain the assignment output by EADAM with any given set of consenting students without sacrificing the running time, hence largely improving, in both theory and practice, over known algorithms. Finally, we show that the set of legal assignments can be much larger than the set of stable matchings, connecting legal matchings with certain concepts and open problems in the literature.
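As background for the abstract above, here is a minimal sketch of Gale and Shapley's student-proposing deferred acceptance, which computes the student-optimal stable assignment that serves as the classical baseline. It is an illustration only, not the paper's legal-assignment or EADAM procedure; the one-seat-per-school simplification, complete preference lists, function name, and toy instance are our own assumptions.

```python
# Minimal sketch of student-proposing deferred acceptance (Gale-Shapley).
# Assumptions (ours, for illustration): one seat per school, equally many
# students and seats, complete strict preference lists on both sides.
# This is the classic baseline, not the paper's EADAM or legal-assignment
# procedure.

def deferred_acceptance(student_prefs, school_prefs):
    """student_prefs: {student: [schools, best first]};
    school_prefs: {school: [students, best first]}.
    Returns the student-optimal stable assignment {student: school}."""
    rank = {school: {st: i for i, st in enumerate(prefs)}
            for school, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}  # next school to propose to
    held = {}                                      # school -> tentatively held student
    free = list(student_prefs)                     # unassigned students
    while free:
        student = free.pop()
        school = student_prefs[student][next_choice[student]]
        next_choice[student] += 1
        current = held.get(school)
        if current is None:
            held[school] = student
        elif rank[school][student] < rank[school][current]:
            held[school] = student       # school tentatively trades up
            free.append(current)         # displaced student proposes again
        else:
            free.append(student)         # rejected; will try the next school
    return {st: sc for sc, st in held.items()}

if __name__ == "__main__":
    students = {"s1": ["A", "B"], "s2": ["A", "B"]}
    schools = {"A": ["s2", "s1"], "B": ["s1", "s2"]}
    print(deferred_acceptance(students, schools))  # {'s1': 'B', 's2': 'A'}
```

Under these assumptions every student ends up assigned, and the output is the student-optimal stable matching.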
Award ID(s):
2046146
NSF-PAR ID:
10317637
Author(s) / Creator(s):
;
Date Published:
Journal Name:
Operations Research
ISSN:
0030-364X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Singh, M. ; Williamson, D. (Ed.)
    Birkhoff’s representation theorem defines a bijection between elements of a distributive lattice L and the family of upper sets of an associated poset B. When elements of L are the stable matchings in an instance of Gale and Shapley’s marriage model, Irving et al. showed how to use B to devise a combinatorial algorithm for maximizing a linear function over the set of stable matchings. In this paper, we introduce a general property of distributive lattices, which we term affine representability, and show its role in efficiently solving linear optimization problems over the elements of a distributive lattice, as well as describing the convex hull of the characteristic vectors of lattice elements. We apply this concept to the stable matching model with path-independent quota-filling choice functions, thus giving efficient algorithms and a compact polyhedral description for this model. To the best of our knowledge, this model generalizes all models from the literature for which similar results were known, and our paper is the first that proposes efficient algorithms for stable matchings with choice functions, beyond extensions of the Deferred Acceptance algorithm.
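As a small illustration of the Birkhoff correspondence invoked in item 1 above: the upper sets of a finite poset, ordered by inclusion, form a distributive lattice, and every finite distributive lattice arises this way. The sketch below simply enumerates the upper sets of a three-element toy poset of our own choosing; it is not the rotation-poset machinery of Irving et al. or the affine representability framework of that paper.

```python
# Toy illustration of Birkhoff's representation theorem: the upper sets of a
# finite poset, ordered by inclusion, form a distributive lattice. The poset
# below is our own example; it is not the rotation poset of a matching instance.

from itertools import combinations

elements = {"p", "q", "r"}
covers = {("p", "r"), ("q", "r")}   # cover relations a < b

def is_upper_set(subset):
    # U is an upper set if a in U and a < b imply b in U.
    return all(b in subset for (a, b) in covers if a in subset)

upper_sets = [set(c) for k in range(len(elements) + 1)
              for c in combinations(sorted(elements), k)
              if is_upper_set(set(c))]

# Join and meet in the corresponding distributive lattice are union and
# intersection of upper sets.
print(upper_sets)   # {}, {'r'}, {'p','r'}, {'q','r'}, {'p','q','r'}
```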
  2. Fix a weakly minimal (i.e. superstable [Formula: see text]-rank [Formula: see text]) structure [Formula: see text]. Let [Formula: see text] be an expansion by constants for an elementary substructure, and let [Formula: see text] be an arbitrary subset of the universe [Formula: see text]. We show that all formulas in the expansion [Formula: see text] are equivalent to bounded formulas, and so [Formula: see text] is stable (or NIP) if and only if the [Formula: see text]-induced structure [Formula: see text] on [Formula: see text] is stable (or NIP). We then restrict to the case that [Formula: see text] is a pure abelian group with a weakly minimal theory, and [Formula: see text] is mutually algebraic (equivalently, weakly minimal with trivial forking). This setting encompasses most of the recent research on stable expansions of [Formula: see text]. Using various characterizations of mutual algebraicity, we give new examples of stable structures of the form [Formula: see text]. Most notably, we show that if [Formula: see text] is a weakly minimal additive subgroup of the algebraic numbers, [Formula: see text] is enumerated by a homogeneous linear recurrence relation with algebraic coefficients, and no repeated root of the characteristic polynomial of [Formula: see text] is a root of unity, then [Formula: see text] is superstable for any [Formula: see text]. 
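A concrete instance of the recurrence hypothesis in item 2 above (our own example, not one taken from the abstract) is the Fibonacci sequence: its characteristic polynomial has two distinct irrational roots, so the condition that no repeated root of the characteristic polynomial is a root of unity holds vacuously.

```latex
% Worked example (ours): the Fibonacci recurrence.
\[
  F_n = F_{n-1} + F_{n-2}, \qquad
  x^2 - x - 1 = (x - \varphi)(x - \psi), \qquad
  \varphi = \tfrac{1+\sqrt{5}}{2}, \quad \psi = \tfrac{1-\sqrt{5}}{2}.
\]
% The roots are distinct, with \varphi > 1 and |\psi| < 1, so neither is a
% root of unity; in particular, no repeated root is a root of unity.
```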
  3. We consider the problem of enumerating optimal solutions for two hypergraph k-partitioning problems, namely, Hypergraph-k-Cut and Minmax-Hypergraph-k-Partition. The input in hypergraph k-partitioning problems is a hypergraph [Formula: see text] with positive hyperedge costs along with a fixed positive integer k. The goal is to find a partition of V into k nonempty parts [Formula: see text]—known as a k-partition—so as to minimize an objective of interest. (1) If the objective of interest is the maximum cut value of the parts, then the problem is known as Minmax-Hypergraph-k-Partition. A subset of hyperedges is a minmax-k-cut-set if it is the subset of hyperedges crossing an optimum k-partition for Minmax-Hypergraph-k-Partition. (2) If the objective of interest is the total cost of hyperedges crossing the k-partition, then the problem is known as Hypergraph-k-Cut. A subset of hyperedges is a min-k-cut-set if it is the subset of hyperedges crossing an optimum k-partition for Hypergraph-k-Cut. We give the first polynomial bound on the number of minmax-k-cut-sets and a polynomial-time algorithm to enumerate all of them in hypergraphs for every fixed k. Our technique is strong enough to also enable an [Formula: see text]-time deterministic algorithm to enumerate all min-k-cut-sets in hypergraphs, thus improving on the previously known [Formula: see text]-time deterministic algorithm, in which n is the number of vertices and p is the size of the hypergraph. The correctness analysis of our enumeration approach relies on a structural result that is a strong and unifying generalization of known structural results for Hypergraph-k-Cut and Minmax-Hypergraph-k-Partition. We believe that our structural result is likely to be of independent interest in the theory of hypergraphs (and graphs).

    Funding: All authors were supported by NSF AF 1814613 and 1907937.

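To make the objects in item 3 above concrete, the sketch below exhaustively lists all min-k-cut-sets of a tiny hypergraph under the Hypergraph-k-Cut objective. The instance, costs, and code are our own toy example, and the exhaustive search is exponential; it is not the paper's polynomial-time enumeration algorithm.

```python
# Brute-force illustration of min-k-cut-sets in a small hypergraph (our toy
# example). Exponential exhaustive search, for definitions only; NOT the
# paper's polynomial-time enumeration algorithm.

from itertools import product

vertices = ["a", "b", "c", "d"]
hyperedges = {("a", "b"): 1, ("b", "c", "d"): 2, ("c", "d"): 1}  # edge -> cost
k = 2

def crossing(part_of):
    # A hyperedge crosses the k-partition if its vertices meet >= 2 parts.
    return {e for e in hyperedges if len({part_of[v] for v in e}) >= 2}

best_cost, min_cut_sets = float("inf"), []
for labels in product(range(k), repeat=len(vertices)):
    if len(set(labels)) < k:                 # every part must be nonempty
        continue
    cut = crossing(dict(zip(vertices, labels)))
    cost = sum(hyperedges[e] for e in cut)
    if cost < best_cost:
        best_cost, min_cut_sets = cost, [cut]
    elif cost == best_cost and cut not in min_cut_sets:
        min_cut_sets.append(cut)

print(best_cost, min_cut_sets)   # 1 [{('a', 'b')}] for this instance
```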
  4. We study optimal design problems in which the goal is to choose a set of linear measurements to obtain the most accurate estimate of an unknown vector. We study the [Formula: see text]-optimal design variant where the objective is to minimize the average variance of the error in the maximum likelihood estimate of the vector being measured. We introduce the proportional volume sampling algorithm to obtain nearly optimal bounds in the asymptotic regime when the number [Formula: see text] of measurements made is significantly larger than the dimension [Formula: see text] and obtain the first approximation algorithms whose approximation factor does not degrade with the number of possible measurements when [Formula: see text] is small. The algorithm also gives approximation guarantees for other optimal design objectives such as [Formula: see text]-optimality and the generalized ratio objective, matching or improving the previously best-known results. We further show that bounds similar to ours cannot be obtained for [Formula: see text]-optimal design and that [Formula: see text]-optimal design is NP-hard to approximate within a fixed constant when [Formula: see text]. 
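For context on item 4 above: minimizing the average variance of the maximum likelihood estimate is, in the experimental design literature, usually written as minimizing the trace of the inverse information matrix (A-optimality). The sketch below only evaluates that objective for a hand-picked subset of candidate measurement vectors; the random toy data are our own, and this is not the proportional volume sampling algorithm.

```python
# Evaluate the A-optimality objective tr((sum of v_i v_i^T)^(-1)) for a chosen
# subset of measurement vectors (our toy data). Under unit Gaussian noise the
# average variance of the least-squares/MLE error is proportional to this
# trace. Objective evaluation only; not proportional volume sampling.

import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(8, 3))          # 8 candidate measurement vectors in R^3

def a_objective(chosen_rows):
    X = V[list(chosen_rows)]         # selected measurements, one per row
    info = X.T @ X                   # information matrix for the selection
    return np.trace(np.linalg.inv(info))

print(a_objective([0, 1, 2, 3]))     # smaller is better
print(a_objective([0, 4, 5, 6, 7]))
```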
    The information bottleneck (IB) approach to clustering takes a joint distribution [Formula: see text] and maps the data [Formula: see text] to cluster labels [Formula: see text], which retain maximal information about [Formula: see text] (Tishby, Pereira, & Bialek, 1999). This objective results in an algorithm that clusters data points based on the similarity of their conditional distributions [Formula: see text]. This is in contrast to classic geometric clustering algorithms such as [Formula: see text]-means and Gaussian mixture models (GMMs), which take a set of observed data points [Formula: see text] and cluster them based on their geometric (typically Euclidean) distance from one another. Here, we show how to use the deterministic information bottleneck (DIB) (Strouse & Schwab, 2017), a variant of IB, to perform geometric clustering by choosing cluster labels that preserve information about data point location on a smoothed data set. We also introduce a novel intuitive method to choose the number of clusters via kinks in the information curve. We apply this approach to a variety of simple clustering problems, showing that DIB with our model selection procedure recovers the generative cluster labels. We also show that, in particular limits of our model parameters, clustering with DIB and IB is equivalent to [Formula: see text]-means and EM fitting of a GMM with hard and soft assignments, respectively. Thus, clustering with (D)IB generalizes and provides an information-theoretic perspective on these classic algorithms.
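Since item 5 above contrasts (D)IB clustering with classic geometric clustering, the sketch below shows the hard-assignment baseline, k-means, which that abstract reports is recovered by DIB clustering in a particular limit of the model parameters. It is our own minimal illustration, not an implementation of IB or DIB.

```python
# Minimal k-means sketch (our illustration). Item 5 states that, in certain
# parameter limits, clustering with DIB reduces to k-means (hard assignments);
# this is that classic baseline, not IB or DIB itself.

import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Hard assignment: each point goes to its nearest center (Euclidean).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update each center to the mean of its assigned points.
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
    labels, centers = kmeans(X, k=2)
    print(centers)   # roughly the two generative cluster centers
```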