
In an instance of the weighted Nash Social Welfare problem, we are given a set G of m indivisible items and a set A of n agents, where each agent i ∈ A has a valuation v_ij ≥ 0 for each item j ∈ G. In addition, every agent i has a nonnegative weight w_i, and the weights collectively sum to 1. The goal is to find an assignment of items to agents that maximizes the weighted geometric mean of the agents' valuations. When all the weights are equal, the problem reduces to the classical Nash Social Welfare problem, which has recently received much attention. In this work, we present an approximation algorithm whose approximation factor depends on the KL-divergence between the weight distribution and the uniform distribution. We generalize the convex programming relaxations for the symmetric variant of Nash Social Welfare presented in [CDG+17, AGSS17] to two different mathematical programs. The first program is convex and is needed for computational efficiency, while the second is a nonconvex relaxation that can be rounded efficiently. The approximation factor derives from the difference in the objective values of the convex and nonconvex relaxations.
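The weighted geometric-mean objective above can be sketched in a few lines; the valuations, weights, and assignment below are hypothetical example data, not from the paper:

```python
import math

def weighted_nash_welfare(valuations, weights, assignment):
    """Weighted Nash Social Welfare of an item-to-agent assignment.

    valuations[i][j] -- agent i's value v_ij for item j.
    weights[i]       -- agent i's weight w_i; the weights sum to 1.
    assignment[j]    -- index of the agent that receives item j.
    """
    n = len(weights)
    # Total valuation each agent receives under the assignment.
    totals = [0.0] * n
    for j, i in enumerate(assignment):
        totals[i] += valuations[i][j]
    # Weighted geometric mean: prod_i totals[i] ** w_i.
    return math.prod(t ** w for t, w in zip(totals, weights))

# Two agents, three items; equal weights recover the classical (symmetric) NSW.
vals = [[4.0, 1.0, 2.0],
        [1.0, 3.0, 5.0]]
print(weighted_nash_welfare(vals, [0.5, 0.5], [0, 1, 1]))
# geometric mean of totals 4 and 8, i.e. sqrt(32)
```

With equal weights w_i = 1/n the expression reduces to the usual unweighted geometric mean, matching the reduction to the classical problem mentioned above.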

We consider the Max-3-Section problem, where we are given an undirected graph G = (V, E) equipped with nonnegative edge weights w : E → R+ and the goal is to partition V into three equal-sized parts while maximizing the total weight of edges crossing between different parts. Max-3-Section is closely related to other well-studied graph partitioning problems, e.g., Max-Cut, Max-3-Cut, and Max-Bisection. We present a polynomial-time algorithm achieving an approximation of 0.795, improving upon the previous best known approximation of 0.673. The requirement of multiple parts of equal size makes Max-3-Section much harder to cope with than, e.g., Max-Bisection. We give a new algorithm that combines the existing Lasserre-hierarchy approach with a random cut strategy, which suffices to give our result.
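As a toy illustration of the objective (not the paper's 0.795-approximation algorithm), a brute-force sketch can evaluate the crossing weight of every balanced 3-partition; the graph below is made-up example data:

```python
import itertools

def crossing_weight(edges, part):
    """Total weight of edges whose endpoints lie in different parts.

    edges   -- list of (u, v, w) triples with nonnegative weights w.
    part[v] -- the part (0, 1, or 2) assigned to vertex v.
    """
    return sum(w for u, v, w in edges if part[u] != part[v])

def brute_force_max_3_section(n, edges):
    """Exact optimum by enumeration (exponential time; illustration only).

    A feasible Max-3-Section solution must put exactly n // 3 vertices
    in each of the three parts, so infeasible assignments are skipped.
    """
    assert n % 3 == 0
    best = 0.0
    for part in itertools.product(range(3), repeat=n):
        if all(part.count(p) == n // 3 for p in range(3)):
            best = max(best, crossing_weight(edges, part))
    return best

# A triangle with unit weights: each vertex in its own part cuts all 3 edges.
triangle = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
print(brute_force_max_3_section(3, triangle))  # 3.0
```

The balanced-parts check is exactly the constraint that distinguishes Max-3-Section from the unconstrained Max-3-Cut.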


The determinant maximization problem gives a general framework that models problems arising in fields as diverse as statistics [Puk06], convex geometry [Kha96], fair allocations [AGSS16], combinatorics [AGV18], spectral graph theory [NST19a], network design, and random processes [KT12]. In an instance of the determinant maximization problem, we are given a collection of vectors U = {v_1, . . . , v_n} ⊂ R^d, and the goal is to pick a subset S ⊆ U of the given vectors to maximize the determinant of the matrix ∑_{i∈S} v_i v_i^T. Often, the set S of picked vectors must satisfy additional combinatorial constraints such as a cardinality constraint (|S| ≤ k) or a matroid constraint (S is a basis of a matroid defined on the vectors). In this paper, we give a polynomial-time deterministic algorithm that returns an r^{O(r)}-approximation for any matroid of rank r ≤ d. This improves previous results that give e^{O(r^2)}-approximation algorithms relying on e^{O(r)}-approximate estimation algorithms [NS16, AG17, AGV18, MNST20] for any r ≤ d. All previous results use convex relaxations and their relationship to stable polynomials and strongly log-concave polynomials. In contrast, our algorithm builds on combinatorial algorithms for matroid intersection, which iteratively improve any solution by finding an alternating negative cycle in the exchange graph defined by the matroids. While the det(·) function is not linear, we show that taking appropriate linear approximations at each iteration suffices to give the improved approximation algorithm.
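A minimal sketch of the objective det(∑_{i∈S} v_i v_i^T), specialized to d = 2 and a cardinality constraint so it fits in a few lines; the vectors are made-up example data, and the brute-force search is an illustration, not the paper's matroid-intersection algorithm:

```python
import itertools

def det2(M):
    """Determinant of a 2x2 matrix given as nested lists."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def det_objective(vectors, subset):
    """det(sum_{i in S} v_i v_i^T) for 2-dimensional vectors v_i."""
    M = [[0.0, 0.0], [0.0, 0.0]]
    for i in subset:
        v = vectors[i]
        for a in range(2):
            for b in range(2):
                M[a][b] += v[a] * v[b]  # accumulate the outer product
    return det2(M)

def brute_force_detmax(vectors, k):
    """Exact maximum over all size-k subsets (exponential; illustration only)."""
    n = len(vectors)
    return max(det_objective(vectors, S)
               for S in itertools.combinations(range(n), k))

U = [(2.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(brute_force_detmax(U, 2))  # 4.0: the two axis-aligned vectors win
```

Geometrically, the objective rewards subsets that span the space with large volume, which is why diverse (near-orthogonal) vectors score well.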

It is well known that the standard greedy algorithm guarantees a worst-case approximation factor of 1 − 1/e when maximizing a monotone submodular function under a cardinality constraint. However, empirical studies show that its performance is substantially better in practice. This raises the natural question of explaining the improved performance of the greedy algorithm. In this work, we define sharpness for submodular functions as a candidate explanation for this phenomenon. We show that the greedy algorithm provably performs better as the sharpness of the submodular function increases. This improvement ties in closely with the faster convergence rates of first-order methods for sharp functions in convex optimization.
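The standard greedy algorithm referred to above can be illustrated on a coverage function, a canonical monotone submodular function; the input sets below are made-up example data:

```python
def greedy_max_coverage(sets, k):
    """Standard greedy for monotone submodular maximization under |S| <= k,
    instantiated on the coverage function f(S) = |union of chosen sets|.
    Guarantees a (1 - 1/e)-approximation to the optimal coverage."""
    chosen, covered = [], set()
    for _ in range(k):
        # Pick the set with the largest marginal gain f(S + i) - f(S).
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # no set adds new elements; stop early
        chosen.append(best)
        covered |= sets[best]
    return chosen, len(covered)

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
print(greedy_max_coverage(sets, 2))  # ([0, 2], 6): all six elements covered
```

On inputs where marginal gains drop off steeply (one informal reading of sharpness), the greedy choices lock in most of the optimum in the first few steps, consistent with the better-than-worst-case behavior discussed above.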

A major challenge in cancer genomics is to identify genes with functional roles in cancer and to uncover their mechanisms of action. We introduce an integrative framework that identifies cancer-relevant genes by pinpointing those whose interaction or other functional sites are enriched in somatic mutations across tumors. We derive analytical calculations that enable us to avoid time-prohibitive permutation-based significance tests, making it computationally feasible to simultaneously consider multiple measures of protein site functionality. Our accompanying software, PertInInt, combines knowledge about sites participating in interactions with DNA, RNA, peptides, ions, or small molecules with domain, evolutionary conservation, and gene-level mutation data. When applied to 10,037 tumor samples, PertInInt uncovers both known and newly predicted cancer genes, while additionally revealing what types of interactions or other functionalities are disrupted. PertInInt's analysis demonstrates that somatic mutations are frequently enriched in interaction sites and domains and implicates interaction perturbation as a pervasive cancer-driving event.