


Award ID contains: 2106444

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. In an instance of the weighted Nash Social Welfare problem, we are given a set of m indivisible items, G, and a set of n agents, A, where each agent i in A has a valuation v_ij ≥ 0 for each item j in G. In addition, every agent i has a non-negative weight w_i such that the weights sum to 1. The goal is to find an assignment of items to agents that maximizes the weighted geometric mean of the agents' valuations. When all the weights are equal, the problem reduces to the classical Nash Social Welfare problem, which has recently received much attention. In this work, we present an approximation algorithm whose approximation factor depends on the KL-divergence between the weight distribution and the uniform distribution. We generalize the convex programming relaxations for the symmetric variant of Nash Social Welfare presented in [CDG+17, AGSS17] to two different mathematical programs. The first program is convex and is needed for computational efficiency, while the second program is a non-convex relaxation that can be rounded efficiently. The approximation factor derives from the difference in the objective values of the convex and non-convex relaxations. (A small illustrative sketch of the weighted objective follows this entry.)
    Free, publicly-accessible full text available January 7, 2025
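
    The following is a minimal sketch, not the paper's algorithm: it only evaluates the weighted Nash Social Welfare objective (the weighted geometric mean defined above) for a fixed allocation. The function name and the toy data are assumptions made for illustration.
```python
import math

def weighted_nsw(valuations, weights, allocation):
    """Weighted geometric mean of agents' total valuations under an allocation.

    valuations[i][j] = value of item j for agent i (v_ij >= 0)
    weights[i]       = agent i's weight w_i, with sum(weights) == 1
    allocation[j]    = index of the agent that receives item j
    """
    utilities = [0.0] * len(weights)
    for j, i in enumerate(allocation):
        utilities[i] += valuations[i][j]
    if any(u <= 0 for u in utilities):
        return 0.0
    # prod_i u_i^{w_i}, computed in log-space for numerical stability
    return math.exp(sum(w * math.log(u) for w, u in zip(weights, utilities)))

# Toy instance: 2 agents, 3 items; equal weights recover the classical NSW objective.
valuations = [[4.0, 1.0, 2.0],
              [1.0, 3.0, 3.0]]
weights = [0.5, 0.5]
print(weighted_nsw(valuations, weights, allocation=[0, 1, 1]))  # sqrt(4 * 6) ~ 4.899
```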
  2. We consider the Max-3-Section problem, where we are given an undirected graph G = (V, E) equipped with non-negative edge weights w : E → R+ and the goal is to find a partition of V into three equisized parts while maximizing the total weight of edges crossing between different parts. Max-3-Section is closely related to other well-studied graph partitioning problems, e.g., Max-Cut, Max-3-Cut, and Max-Bisection. We present a polynomial-time algorithm achieving an approximation of 0.795, improving upon the previous best known approximation of 0.673. The requirement that the parts have equal sizes makes Max-3-Section much harder to handle than, e.g., Max-Bisection. We give a new algorithm that combines the Lasserre hierarchy approach with a random cut strategy, which suffices to obtain our result. (A small sketch of the objective appears after this entry.)
    Free, publicly-accessible full text available August 30, 2024
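
    Below is a minimal sketch of evaluating the Max-3-Section objective for a given three-way partition, not the approximation algorithm itself; the function name and the toy graph are assumptions for illustration.
```python
def max3section_value(edges, part):
    """Total weight of edges whose endpoints lie in different parts.

    edges is a list of (u, v, weight) triples; part[u] in {0, 1, 2} gives u's part.
    A feasible Max-3-Section solution also needs the three parts to have equal
    sizes, which the caller must ensure.
    """
    return sum(w for u, v, w in edges if part[u] != part[v])

# Toy instance: 6 vertices split into three parts of size 2.
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (3, 4, 1.5), (4, 5, 2.5), (5, 0, 1.0)]
part = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2}
print(max3section_value(edges, part))  # crossing edges (1,2), (3,4), (5,0) -> 3.5
```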
  3. Given a set of facilities and clients, and costs to open facilities, the classic facility location problem seeks to open a set of facilities and assign each client to one open facility so as to minimize the cost of opening the chosen facilities plus the total distance of the clients to their assigned open facilities. Such an objective may induce an unequal cost over certain socioeconomic groups of clients (i.e., the total distance traveled by clients in such a group). This is important when planning the location of socially relevant facilities such as emergency rooms. In this work, we consider a fair version of the problem where we are given r client groups that partition the set of clients, and the distance of a given group is defined as the average distance of clients in the group to their respective open facilities. The objective is to minimize the Minkowski p-norm of the vector of group distances, to penalize high access costs to open facilities across the r groups of clients. This generalizes classic facility location (p = 1) and the minimization of the maximum group distance (p = ∞). However, in practice, fairness criteria may not be explicit or even known to a decision maker, and it is often unclear how to select a specific "p" to model the cost of unfairness. To get around this, we study the notion of solution portfolios where, for a fixed problem instance, we seek a small portfolio of solutions such that for any Minkowski norm p, one of these solutions is an O(1)-approximation. Using the geometric relationship between various p-norms, we show the existence of a portfolio of cardinality O(log r), and a lower bound of Ω(√(log r)). There may not be common structure across different solutions in this portfolio, which can make planning difficult if the notion of fairness changes over time or if the budget to open facilities is disbursed over time. For example, small changes in p could lead to a completely different set of open facilities in the portfolio. Inspired by this, we introduce the notion of refinement, which is a family of solutions, one for each p-norm, satisfying a combinatorial property. This property requires that (1) the set of facilities open for a higher p-norm must be a subset of the facilities open for a lower p-norm, and (2) all clients assigned to an open facility for a lower p-norm must be assigned to the same open facility for any higher p-norm. A refinement is α-approximate if the solution for each p-norm problem is an α-approximation for it. We show that it suffices to consider only O(log r) norms instead of all p-norms, p ∈ [1, ∞], to construct refinements. A natural greedy algorithm for the problem gives a poly(r)-approximate refinement, which we improve to poly(r^{1/√(log r)})-approximate using a recursive algorithm. We further improve this ratio to O(log r) for the special case of tree metrics with uniform facility opening costs. Our recursive algorithm extends to other settings, including a hierarchical facility location problem that models facility location at several levels, such as public works departments and schools. A full version of this paper can be found at https://arxiv.org/abs/2211.14873. (A small sketch of the p-norm group objective follows this entry.)
    Free, publicly-accessible full text available July 7, 2024
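
    The following is a minimal sketch of the fairness objective only (group average distances and their Minkowski p-norm), not of the portfolio or refinement algorithms; names and data are illustrative assumptions.
```python
def group_pnorm_objective(dist_to_facility, groups, p):
    """Minkowski p-norm of the vector of group distances.

    dist_to_facility[c] = distance from client c to its assigned open facility.
    groups              = list of client groups (each a list of client ids).
    The distance of a group is the average distance of its clients;
    p = float('inf') recovers the maximum group distance.
    """
    group_dists = [sum(dist_to_facility[c] for c in g) / len(g) for g in groups]
    if p == float("inf"):
        return max(group_dists)
    return sum(d ** p for d in group_dists) ** (1.0 / p)

# Toy instance: 4 clients in r = 2 groups, with fixed assigned distances.
dist_to_facility = {0: 1.0, 1: 3.0, 2: 2.0, 3: 6.0}
groups = [[0, 1], [2, 3]]
print(group_pnorm_objective(dist_to_facility, groups, p=1))             # 2.0 + 4.0 = 6.0
print(group_pnorm_objective(dist_to_facility, groups, p=float("inf")))  # 4.0
```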
  4. Telikepalli Kavitha and Kurt Mehlhorn (Eds.)
    We present a very simple and intuitive algorithm to find balanced sparse cuts in a graph via shortest paths. Our algorithm combines a new multiplicative-weights framework for solving unit-weight multi-commodity flows with standard ball-growing arguments. Using Dijkstra's algorithm to recompute the shortest paths afresh every time gives a very simple algorithm that runs in Õ(m²/φ) time and finds an Õ(φ)-sparse balanced cut when the given graph has a φ-sparse balanced cut. Combining our algorithm with known deterministic data structures for answering approximate All-Pairs Shortest Paths (APSP) queries under increasing edge weights (the decremental setting), we obtain a simple deterministic algorithm that finds m^{o(1)}·φ-sparse balanced cuts in m^{1+o(1)}/φ time. Our deterministic almost-linear time algorithm matches the state of the art in the randomized and deterministic settings up to subpolynomial factors, while being significantly simpler to understand and analyze, especially compared to the only previous almost-linear time deterministic algorithm, a recent breakthrough by Chuzhoy-Gao-Li-Nanongkai-Peng-Saranurak (FOCS 2020). (A small sketch of the multiplicative-weights routing step follows this entry.)
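
    Below is a minimal sketch of one ingredient described above: routing unit demands along shortest paths while multiplicatively increasing the lengths of used edges. It omits the ball-growing step and the cut extraction; all function names and parameters are illustrative assumptions.
```python
import heapq
from collections import defaultdict

def dijkstra(adj, length, src):
    """Shortest-path distances and parent pointers from src under current edge lengths."""
    dist, parent = {src: 0.0}, {src: None}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v in adj[u]:
            nd = d + length[frozenset((u, v))]
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, parent

def route_with_multiplicative_weights(edges, demands, eps=0.1):
    """Route each unit demand (s, t) on a shortest path under the current lengths,
    then scale the lengths of the used edges by (1 + eps), so congested edges
    become expensive and later paths avoid them."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    length = {frozenset(e): 1.0 for e in edges}
    congestion = defaultdict(float)
    for s, t in demands:
        dist, parent = dijkstra(adj, length, s)
        if t not in dist:
            continue  # s and t are disconnected
        v = t
        while parent[v] is not None:          # walk the shortest path back from t to s
            e = frozenset((parent[v], v))
            congestion[e] += 1.0
            length[e] *= 1.0 + eps
            v = parent[v]
    return congestion

# Toy instance: a 4-cycle with a chord, three unit demands.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
demands = [(0, 2), (1, 3), (0, 3)]
print(route_with_multiplicative_weights(edges, demands))
```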
  5. The determinant maximization problem gives a general framework that models problems arising in fields as diverse as statistics [Puk06], convex geometry [Kha96], fair allocations [AGSS16], combinatorics [AGV18], spectral graph theory [NST19a], network design, and random processes [KT12]. In an instance of the determinant maximization problem, we are given a collection of vectors U = {v_1, ..., v_n} ⊂ R^d, and the goal is to pick a subset S ⊆ U of the given vectors to maximize the determinant of the matrix ∑_{i∈S} v_i v_i^T. Often, the set S of picked vectors must satisfy additional combinatorial constraints such as a cardinality constraint (|S| ≤ k) or a matroid constraint (S is a basis of a matroid defined on the vectors). In this paper, we give a polynomial-time deterministic algorithm that returns an r^{O(r)}-approximation for any matroid of rank r ≤ d. This improves previous results that give e^{O(r^2)}-approximation algorithms relying on e^{O(r)}-approximate estimation algorithms [NS16, AG17, AGV18, MNST20] for any r ≤ d. All previous results use convex relaxations and their relationship to stable polynomials and strongly log-concave polynomials. In contrast, our algorithm builds on combinatorial algorithms for matroid intersection, which iteratively improve any solution by finding an alternating negative cycle in the exchange graph defined by the matroids. While the det(·) function is not linear, we show that taking appropriate linear approximations at each iteration suffices to give the improved approximation algorithm. (A small sketch of the objective follows this entry.)
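
    The following is a minimal sketch of the objective only: evaluating det(∑_{i∈S} v_i v_i^T) for a candidate subset, with a brute-force search over a tiny cardinality-constrained instance. It is not the paper's matroid-intersection algorithm; names and data are assumptions for illustration.
```python
from itertools import combinations
import numpy as np

def det_objective(vectors, S):
    """Determinant of the sum of outer products v_i v_i^T over i in S."""
    d = vectors.shape[1]
    M = np.zeros((d, d))
    for i in S:
        v = vectors[i]
        M += np.outer(v, v)
    return np.linalg.det(M)

# Tiny instance: n = 5 vectors in R^2 under a cardinality constraint |S| = k with k = 2,
# solved by brute force purely to illustrate the objective, not the algorithm.
vectors = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0], [0.5, 0.5], [2.0, 0.1]])
k = 2
best = max(combinations(range(len(vectors)), k), key=lambda S: det_objective(vectors, S))
print(best, det_objective(vectors, best))
```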
  6. We give an algorithm that computes exact maximum flows and minimum-cost flows on directed graphs with m edges and polynomially bounded integral demands, costs, and capacities in m^{1+o(1)} time. Our algorithm builds the flow through a sequence of m^{1+o(1)} approximate undirected minimum-ratio cycles, each of which is computed and processed in amortized m^{o(1)} time using a new dynamic graph data structure. Our framework extends to algorithms running in m^{1+o(1)} time for computing flows that minimize general edge-separable convex functions to high accuracy. This gives almost-linear time algorithms for several problems including entropy-regularized optimal transport, matrix scaling, p-norm flows, and p-norm isotonic regression on arbitrary directed acyclic graphs. (A toy instance of the two flow problems appears after this entry.)
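
    Purely to illustrate the problem setup (not the paper's almost-linear-time algorithm), the sketch below solves a toy maximum-flow and minimum-cost-flow instance with off-the-shelf networkx routines; the graphs and numbers are assumptions for the example.
```python
import networkx as nx

# Maximum flow on a small directed graph with integer capacities.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "b", capacity=1)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)
flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
print("max flow value:", flow_value)

# Minimum-cost flow: node 'demand' attributes (negative = supply), edge 'weight' = cost.
H = nx.DiGraph()
H.add_node("s", demand=-4)
H.add_node("t", demand=4)
H.add_edge("s", "a", capacity=3, weight=1)
H.add_edge("s", "b", capacity=2, weight=2)
H.add_edge("a", "t", capacity=3, weight=1)
H.add_edge("b", "t", capacity=3, weight=1)
print("min-cost flow:", nx.min_cost_flow(H))
```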