Gale and Shapley’s stable assignment problem has been extensively studied, applied, and extended. In the context of school choice, mechanisms often aim at finding an assignment that is more favorable to students. We investigate two extensions introduced in this framework—legal assignments and the efficiency-adjusted deferred acceptance mechanism (EADAM)—through the lens of the classic theory of stable matchings. In any instance, the set $\mathcal{L}$ of legal assignments is known to contain all stable assignments. We prove that $\mathcal{L}$ is exactly the set of stable assignments in another instance. Moreover, we show that essentially all optimization problems over $\mathcal{L}$ can be solved within the same time bound needed to solve them over the set of stable assignments. A key tool for this latter result is an algorithm that finds the student-optimal legal assignment. We then generalize our algorithm to obtain the assignment output by EADAM with any given set of consenting students without sacrificing the running time, hence largely improving, in both theory and practice, over known algorithms. Finally, we show that the set $\mathcal{L}$ can be much larger than the set of stable matchings, connecting legal matchings with certain concepts and open problems in the literature.
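For context, the results above build on Gale and Shapley's deferred acceptance procedure. The following is a minimal Python sketch of the student-proposing version for a one-to-one market; the data format, strict preference lists, and unit school capacities are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of student-proposing deferred acceptance (Gale-Shapley),
# the classical procedure that legal assignments and EADAM build on.
# The one-to-one setting and dict-based input format are illustrative assumptions.

def deferred_acceptance(student_prefs, school_prefs):
    """student_prefs / school_prefs: dict mapping each agent to a strict preference
    list over the other side (most preferred first). Returns a stable matching
    as a dict {school: student}."""
    # rank[s][x] = position of student x in school s's list (lower is better)
    rank = {s: {x: i for i, x in enumerate(prefs)} for s, prefs in school_prefs.items()}
    next_proposal = {x: 0 for x in student_prefs}   # index of x's next school to try
    free = list(student_prefs)                      # students currently unmatched
    match = {}                                      # school -> tentatively held student
    while free:
        x = free.pop()
        if next_proposal[x] >= len(student_prefs[x]):
            continue                                # x has exhausted their list
        s = student_prefs[x][next_proposal[x]]
        next_proposal[x] += 1
        if s not in match:
            match[s] = x                            # s tentatively accepts x
        elif rank[s][x] < rank[s][match[s]]:
            free.append(match[s])                   # s trades up; old student is free again
            match[s] = x
        else:
            free.append(x)                          # s rejects x; x proposes again later
    return match

# Example usage (illustrative data):
students = {"a": ["S1", "S2"], "b": ["S1", "S2"]}
schools = {"S1": ["b", "a"], "S2": ["a", "b"]}
print(deferred_acceptance(students, schools))       # {'S1': 'b', 'S2': 'a'}
```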
Affinely representable lattices, stable matchings, and choice functions
Birkhoff’s representation theorem defines a bijection between elements of a distributive lattice L and the family of upper sets of an associated poset B. When the elements of L are the stable matchings in an instance of Gale and Shapley’s marriage model, Irving et al. showed how to use B to devise a combinatorial algorithm for maximizing a linear function over the set of stable matchings. In this paper, we introduce a general property of distributive lattices, which we term affine representability, and show its role in efficiently solving linear optimization problems over the elements of a distributive lattice, as well as in describing the convex hull of the characteristic vectors of lattice elements. We apply this concept to the stable matching model with path-independent quota-filling choice functions, thus giving efficient algorithms and a compact polyhedral description for this model. To the best of our knowledge, this model generalizes all models from the literature for which similar results were known, and our paper is the first that proposes efficient algorithms for stable matchings with choice functions beyond extensions of the Deferred Acceptance algorithm.
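The combinatorial approach of Irving et al. cited above rests on a standard reduction: once lattice elements are identified with upper sets of a poset (Birkhoff), maximizing a linear objective over the lattice becomes a maximum-weight closure problem, solvable with a single minimum cut. Below is a hedged Python sketch of that closure step using networkx; the poset, arc orientation, and weights are illustrative assumptions, not data from the paper.

```python
# Hedged sketch of the classical machinery behind the combinatorial algorithm cited
# above: lattice elements correspond to upper sets of a poset, so maximizing a linear
# objective reduces to maximum-weight closure, solvable with one minimum cut.
import networkx as nx

def max_weight_closure(nodes, arcs, weight):
    """nodes: poset elements; arcs: (u, v) pairs meaning 'selecting u forces v';
    weight: dict node -> float. Returns a maximum-weight closed subset."""
    node_set = set(nodes)
    G = nx.DiGraph()
    G.add_edge("source", "sink", capacity=0.0)        # ensure both terminals exist
    for u, v in arcs:
        G.add_edge(u, v, capacity=float("inf"))       # closure constraint
    for v in nodes:
        if weight[v] > 0:
            G.add_edge("source", v, capacity=weight[v])
        elif weight[v] < 0:
            G.add_edge(v, "sink", capacity=-weight[v])
    _, (source_side, _) = nx.minimum_cut(G, "source", "sink")
    return source_side & node_set

# Illustrative poset: selecting 'a' forces 'b', and 'b' forces 'c'.
closure = max_weight_closure(
    nodes=["a", "b", "c"],
    arcs=[("a", "b"), ("b", "c")],
    weight={"a": 5.0, "b": -2.0, "c": -1.0},
)
print(closure)   # {'a', 'b', 'c'}, total weight 5 - 2 - 1 = 2
```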
- Award ID(s):
- 2046146
- NSF-PAR ID:
- 10317632
- Editor(s):
- Singh, M.; Williamson, D.
- Date Published:
- Journal Name:
- Lecture Notes in Computer Science
- ISSN:
- 0302-9743
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
While the stable marriage problem and its variants model a vast range of matching markets, they fail to capture complex agent relationships, such as the affiliation of applicants and employers in an interview marketplace. To model this problem, the existing literature on matching with externalities permits agents to provide complete and total rankings over matchings based on both their own and their affiliates' matches. This complete-ordering requirement is unrealistic, and furthermore the model may have an empty core. To address this, we introduce the Dichotomous Affiliate Stable Matching (DASM) Problem, where agents' preferences indicate dichotomous acceptance or rejection of another agent in the marketplace, both for themselves and for their affiliates. We also assume that each agent's preferences over entire matchings are determined by a general weighted valuation function of their (and their affiliates') matches. Our results are threefold: (1) we use a human study to show that real-world matching rankings follow our assumed valuation function; (2) we prove that a stable solution always exists by providing an efficient, easily implementable algorithm that finds one; and (3) we experimentally validate the efficiency of our algorithm against a linear-programming-based approach.
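To make the preference model above concrete, here is a small Python sketch of one plausible dichotomous affiliate valuation: an agent scores a matching by a weighted count of acceptable matches, counting its own match and those of its affiliates. The function name, weighting scheme, and data layout are assumptions for exposition, not the paper's exact formulation.

```python
# Illustrative sketch of a dichotomous affiliate valuation (names and weights are
# assumptions, not the paper's exact formulation): an agent values a matching by a
# weighted count of acceptable partners, for itself and for its affiliates.

def affiliate_value(agent, matching, acceptable, affiliates, w_self=1.0, w_aff=1.0):
    """matching: dict agent -> partner; acceptable[a]: set of partners that a
    approves of; affiliates[a]: list of agents affiliated with a."""
    value = 0.0
    if matching.get(agent) in acceptable[agent]:
        value += w_self                              # agent's own match is acceptable
    for b in affiliates.get(agent, []):
        if matching.get(b) in acceptable[agent]:
            value += w_aff                           # affiliate b is matched acceptably
    return value

# Example: applicant "r1" is affiliated with lab "L" and approves of {"e1", "e2"}.
matching = {"r1": "e1", "L": "e2"}
acceptable = {"r1": {"e1", "e2"}}
print(affiliate_value("r1", matching, acceptable, {"r1": ["L"]}))   # 2.0
```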
-
Cormode, Graham; Shekelyan, Michael (Ed.) Datalog^∘ is an extension of Datalog where, instead of a program being a collection of unions of conjunctive queries over the standard Boolean semiring, a program may now be a collection of sum-product queries over an arbitrary commutative partially ordered pre-semiring. Datalog^∘ is more powerful than Datalog in that its additional algebraic structure allows it to support recursion with aggregation. At the same time, Datalog^∘ retains the syntactic and semantic simplicity of Datalog: Datalog^∘ has declarative least fixpoint semantics. The least fixpoint can be found via the naïve evaluation algorithm, which repeatedly applies the immediate consequence operator until no further change is possible. It was shown in [Mahmoud Abo Khamis et al., 2022] that, when the underlying semiring is p-stable, the naïve evaluation of any Datalog^∘ program over the semiring converges in a finite number of steps. However, the known upper bounds on the rate of convergence were exponential in the number n of ground IDB atoms. This paper establishes polynomial upper bounds on the convergence rate of the naïve algorithm on linear Datalog^∘ programs, which are quite common in practice. In particular, the main result of this paper is that the convergence rate of linear Datalog^∘ programs under any p-stable semiring is O(pn³). Furthermore, we show a matching lower bound by constructing a p-stable semiring and a linear Datalog^∘ program that requires Ω(pn³) iterations for the naïve iteration algorithm to converge. Next, we study the convergence rate in terms of the number of elements in the semiring for linear Datalog^∘ programs. When L is the number of elements, the convergence rate is bounded by O(pn log L). This significantly improves the convergence rate for small L. We show a nearly matching lower bound as well.
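To illustrate the naïve evaluation described above, here is a toy Python sketch of a linear recursive rule evaluated over a semiring, with the immediate consequence operator applied until a fixpoint is reached. The choice of the tropical (min, +) semiring and the shortest-path style program are illustrative assumptions, not taken from the paper.

```python
# Toy sketch of naive fixpoint evaluation of a linear recursive rule over a semiring.
# The semiring here is tropical (min as "plus", + as "times"), giving single-source
# shortest paths:  dist(y) :- dist(x), edge(x, y).  Instance data are illustrative.
INF = float("inf")

def naive_eval(edges, source, nodes):
    dist = {v: INF for v in nodes}       # bottom element of the semiring order
    dist[source] = 0.0
    while True:
        new = dict(dist)
        for (x, y), w in edges.items():  # one application of the immediate consequence operator
            new[y] = min(new[y], dist[x] + w)
        if new == dist:                  # no further change: least fixpoint reached
            return dist
        dist = new

edges = {("s", "a"): 1.0, ("a", "b"): 2.0, ("s", "b"): 5.0}
print(naive_eval(edges, "s", ["s", "a", "b"]))   # {'s': 0.0, 'a': 1.0, 'b': 3.0}
```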
-
Over the last two decades, a significant line of work in theoretical algorithms has made progress in solving linear systems of the form $\mathbf{L}\mathbf{x} = \mathbf{b}$, where $\mathbf{L}$ is the Laplacian matrix of a weighted graph with weights $w(i,j)>0$ on the edges. The solution $\mathbf{x}$ of the linear system can be interpreted as the potentials of an electrical flow in which the resistance on edge $(i,j)$ is $1/w(i,j)$. Kelner, Orecchia, Sidford, and Zhu \cite{KOSZ13} give a combinatorial, near-linear time algorithm that maintains the Kirchhoff Current Law and gradually enforces the Kirchhoff Potential Law by updating flows around cycles ({\it cycle toggling}). In this paper, we consider a dual version of the algorithm that maintains the Kirchhoff Potential Law and gradually enforces the Kirchhoff Current Law by {\it cut toggling}: each iteration updates all potentials on one side of a fundamental cut of a spanning tree by the same amount. We prove that this dual algorithm also runs in a near-linear number of iterations. We show, however, that if we abstract cut toggling as a natural data structure problem, this problem can be reduced to the online vector-matrix-vector problem (OMv), which has been conjectured to be difficult for dynamic algorithms \cite{HKNS15}. The conjecture implies that the data structure does not have an $O(n^{1-\epsilon})$ time algorithm for any $\epsilon > 0$, and thus a straightforward implementation of the cut-toggling algorithm requires essentially linear time per iteration. To circumvent the lower bound, we batch the update steps and perform them simultaneously instead of sequentially. An appropriate choice of batching leads to an $\widetilde{O}(m^{1.5})$ time cut-toggling algorithm for solving Laplacian systems. Furthermore, we show that if we sparsify the graph and call our algorithm recursively on the Laplacian system implied by batching and sparsifying, we can reduce the running time to $O(m^{1 + \epsilon})$ for any $\epsilon > 0$. Thus, the dual cut-toggling algorithm can achieve (almost) the same running time as its primal cycle-toggling counterpart.
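The following Python sketch illustrates a single cut-toggling step as described above: with potentials fixed, the flow on each edge is determined (so the Kirchhoff Potential Law holds by construction), and shifting all potentials on one side of a cut by a common amount enforces the Kirchhoff Current Law across that cut. This is a conceptual toy under simplifying assumptions; the paper's algorithm additionally relies on fundamental cuts of a spanning tree, careful cut selection, batching, and sparsification.

```python
# Conceptual sketch of one "cut toggling" step (illustrative and simplified).
# Potentials x induce the flow w(i,j) * (x[i] - x[j]) on edge (i, j), so the
# Kirchhoff Potential Law holds by construction. A step picks a cut (S, V \ S)
# and shifts every potential in S so the net flow leaving S equals the demand b(S).

def cut_toggle(x, edges, b, S):
    """x: dict vertex -> potential; edges: dict (i, j) -> weight (one key per
    undirected edge); b: dict vertex -> demand; S: vertices on one side of the cut."""
    flow_across = 0.0    # current net flow leaving S
    cut_weight = 0.0     # total weight of edges crossing the cut
    for (i, j), w in edges.items():
        if (i in S) != (j in S):
            sgn = 1.0 if i in S else -1.0
            flow_across += sgn * w * (x[i] - x[j])
            cut_weight += w
    demand_S = sum(b[v] for v in S)
    delta = (demand_S - flow_across) / cut_weight   # shift fixing the cut's KCL violation
    for v in S:
        x[v] += delta
    return x

# Illustrative example: path s - a - t with unit weights, route one unit from s to t.
edges = {("s", "a"): 1.0, ("a", "t"): 1.0}
b = {"s": 1.0, "a": 0.0, "t": -1.0}
x = {"s": 0.0, "a": 0.0, "t": 0.0}
cut_toggle(x, edges, b, {"s"})        # fix the cut separating s
cut_toggle(x, edges, b, {"s", "a"})   # then the cut separating t
print(x)                              # {'s': 2.0, 'a': 1.0, 't': 0.0}
```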
-
Berry, Jonathan; Shmoys, David; Cowen, Lenore; Naumann, Uwe (Ed.) Continuous DR-submodular functions are a class of functions that satisfy the Diminishing Returns (DR) property, which implies that they are concave along non-negative directions. Existing works have studied monotone continuous DR-submodular maximization subject to a convex constraint and have proposed efficient algorithms with approximation guarantees. However, in many applications, e.g., computing the stability number of a graph and mean-field inference for probabilistic log-submodular models, the DR-submodular function has the additional property of being strongly concave along non-negative directions, which can be exploited to obtain faster convergence rates. In this paper, we first introduce and characterize the class of strongly DR-submodular functions and show how such a property implies strong concavity along non-negative directions. Then, we study L-smooth monotone strongly DR-submodular functions that have bounded curvature, and we show how to exploit such additional structure to obtain algorithms with improved approximation guarantees and faster convergence rates for the maximization problem. In particular, we propose the SDRFW algorithm, which matches the provably optimal approximation ratio after only iterations, where c ∈ [0,1] and μ ≥ 0 are the curvature and the strong DR-submodularity parameter. Furthermore, we study the Projected Gradient Ascent (PGA) method for this problem and provide a refined analysis of the algorithm with an improved approximation ratio (compared to ½ in prior works) and a linear convergence rate. Given that both algorithms require knowledge of the smoothness parameter L, we provide a novel characterization of L for DR-submodular functions, showing that in many cases computing L can be formulated as a convex optimization problem, i.e., a geometric program, that can be solved efficiently. Experimental results illustrate and validate the efficiency and effectiveness of our algorithms.
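As an illustration of the Projected Gradient Ascent method mentioned above, here is a minimal Python sketch of PGA over a box constraint with a toy separable concave (hence monotone DR-submodular) objective; the objective, step size, and constraint set are assumptions for exposition, not the paper's setting.

```python
# Hedged sketch of projected gradient ascent (PGA) for maximizing a monotone
# DR-submodular function over a box constraint; the objective, step size, and
# constraint set are illustrative assumptions, not the paper's experimental setup.
import numpy as np

def projected_gradient_ascent(grad_f, x0, lower, upper, eta=0.1, iters=100):
    """Iterates x_{t+1} = Proj_C(x_t + eta * grad_f(x_t)), where C is the box
    [lower, upper]; returns the final iterate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x + eta * grad_f(x)
        x = np.clip(x, lower, upper)      # Euclidean projection onto the box
    return x

# Toy separable concave (hence monotone DR-submodular) objective: f(x) = sum_i log(1 + x_i).
grad_f = lambda x: 1.0 / (1.0 + x)
x_star = projected_gradient_ascent(grad_f, x0=np.zeros(3), lower=0.0, upper=1.0)
print(np.round(x_star, 3))                # converges to the upper corner [1, 1, 1]
```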