In an instance of the weighted Nash Social Welfare problem, we are given a set G of m indivisible items and a set A of n agents, where each agent i in A has a valuation v_ij ≥ 0 for each item j in G. In addition, every agent i has a non-negative weight w_i, and the weights sum to 1. The goal is to find an assignment of items to agents that maximizes the weighted geometric mean of the valuations received by the agents. When all the weights are equal, the problem reduces to the classical Nash Social Welfare problem, which has recently received much attention. In this work, we present an approximation algorithm whose approximation factor depends on the KL-divergence between the weight distribution and the uniform distribution. We generalize the convex programming relaxations for the symmetric variant of Nash Social Welfare presented in [CDG+17, AGSS17] to two different mathematical programs. The first program is convex, which is needed for computational efficiency, while the second is a non-convex relaxation that can be rounded efficiently. The approximation factor derives from the difference in the objective values of the convex and non-convex relaxations.
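For concreteness, if x_ij ∈ {0,1} indicates that item j is assigned to agent i (our notation, not the paper's), the objective and the divergence term that the approximation factor depends on are:

    \text{maximize} \quad \prod_{i \in A} \Big( \sum_{j \in G} v_{ij}\, x_{ij} \Big)^{w_i}
    \qquad\qquad
    D_{\mathrm{KL}}(w \,\Vert\, u) = \sum_{i \in A} w_i \log ( n\, w_i ), \quad u_i = \tfrac{1}{n}.

When all w_i = 1/n, the divergence is 0, matching the reduction to the symmetric Nash Social Welfare problem.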
Approximation Strategies for Incomplete MaxSAT
Incomplete MaxSAT solving aims to quickly find a solution that attempts to minimize the sum of the weights of the unsatisfied soft clauses without providing any optimality guarantees. In this paper, we propose two approximation strategies for improving incomplete MaxSAT solving. In the first strategy, we cluster the weights and approximate them with a representative weight. In the second, we break the problem of minimizing the sum of weights of the unsatisfied soft clauses into multiple minimization subproblems. Experimental results show that these approximation strategies can find better solutions than the best incomplete solvers from the MaxSAT Evaluation 2017.
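As an illustration of the weight-clustering strategy, the sketch below collapses the distinct soft-clause weights into a small number of representative weights; the number of clusters k and the use of the cluster mean as the representative are assumptions made for illustration, not the paper's exact procedure.

    # Illustrative sketch (Python): approximate soft-clause weights by a
    # representative weight per cluster, so the solver sees fewer distinct weights.
    def cluster_weights(weights, k):
        """Map each weight to the mean of its bucket of nearby distinct weights."""
        distinct = sorted(set(weights))
        bucket_size = max(1, len(distinct) // k)
        representative = {}
        for start in range(0, len(distinct), bucket_size):
            bucket = distinct[start:start + bucket_size]
            rep = sum(bucket) / len(bucket)
            for w in bucket:
                representative[w] = rep
        return [representative[w] for w in weights]

    # Nine distinct weights collapse to three representatives.
    print(cluster_weights([1, 2, 3, 10, 11, 12, 100, 101, 102], k=3))

The approximated instance can then be handed to any incomplete MaxSAT solver; the second strategy instead splits the minimization into subproblems that are solved separately.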
- Award ID(s): 1762363
- PAR ID: 10095675
- Date Published:
- Journal Name: International Conference on Principles and Practice of Constraint Programming
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Adding blocked clauses to a CNF formula can substantially speed up SAT-solving, both in theory and practice. In theory, the addition of blocked clauses can exponentially reduce the length of the shortest refutation for a formula [17, 19]. In practice, it has recently been shown that the runtime of CDCL solvers decreases significantly for certain instance families when blocked clauses are added as a preprocessing step [10, 22]. This fact is in contrast to, but not in contradiction with, prior results showing that Blocked-Clause Elimination (BCE) is sometimes an effective preprocessing step [14, 15]. We suggest that the practical role of blocked clauses in SAT-solving might be richer than expected. Concretely, we propose a theoretical study of the complexity of Blocked-Clause Addition (BCA) as a preprocessing step for SAT-solving, and in particular consider the problem of adding the maximum number of blocked clauses of a given arity k to an input formula F. While BCE is a confluent process, meaning that the order in which blocked clauses are eliminated is irrelevant, this is not the case for BCA: adding a blocked clause to a formula might unblock a different clause that was previously blocked. This order-sensitivity turns out to be a crucial obstacle to carrying out BCA efficiently as a preprocessing step. Our main result is that computing the maximum number of k-ary blocked clauses that can be added to an input formula F is NP-hard for every k ≥ 2.
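For reference, a clause C is blocked on one of its literals l with respect to a CNF formula F if every resolvent of C with a clause of F containing ¬l is a tautology; the sketch below checks this condition directly (the set-of-integer-literals representation is our choice for illustration).

    # Minimal sketch (Python): clauses as sets of DIMACS-style integer literals.
    def is_tautology(clause):
        """A clause is a tautology if it contains some literal and its negation."""
        return any(-lit in clause for lit in clause)

    def is_blocked(clause, literal, formula):
        """True if `clause` is blocked on `literal` w.r.t. `formula`: every
        resolvent on `literal` with a clause containing -literal is a tautology."""
        assert literal in clause
        for other in formula:
            if -literal in other:
                resolvent = (clause - {literal}) | (other - {-literal})
                if not is_tautology(resolvent):
                    return False
        return True

    # (a or b) is blocked on a w.r.t. {(not a or not b)}: the only resolvent, (b or not b), is a tautology.
    print(is_blocked({1, 2}, 1, [{-1, -2}]))  # True

The order-sensitivity described above shows up here: adding a new clause that contains ¬l and produces a non-tautological resolvent can flip this check from True to False for a clause that was previously blocked.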
-
Bun, Mark (Ed.) We introduce and study the problem of balanced districting, where, given an undirected graph whose vertices carry two types of weights (e.g., different population or resource types), the goal is to maximize the total weight covered by vertex-disjoint districts such that each district is a star or, more generally, a connected induced subgraph with the two weights balanced. This problem is strongly motivated by political redistricting, where contiguity, population balance, and compactness are essential. We provide hardness results and approximation algorithms for this problem. In particular, we show NP-hardness of approximation better than n^{1/2-δ} for any constant δ > 0 in general graphs, even when the districts are star graphs, as well as NP-hardness on complete graphs, tree graphs, planar graphs, and other restricted settings. On the other hand, we develop an algorithm for balanced star districting that gives an O(√n)-approximation on any graph (which is essentially tight given the matching hardness-of-approximation result) and an O(log n)-approximation on planar graphs, with extensions to minor-free graphs. Our algorithm uses a modified Whack-a-Mole algorithm [Bhattacharya, Kiss, and Saranurak, SODA 2023] to find a sparse solution of a fractional packing linear program (despite exponentially many variables), which requires a new design of a separation oracle specific to our balanced districting problem. To turn the fractional solution into a feasible integer solution, we adopt the randomized rounding algorithm of [Chan and Har-Peled, SoCG 2009]. To get a good approximation ratio from the rounding procedure, a crucial element in the analysis is the balanced scattering separators for planar graphs and minor-free graphs: separators that can be partitioned into a small number of k-hop independent sets for some constant k, which may be of independent interest in solving other packing-style problems. Further, our algorithm is versatile: the very same algorithm can be analyzed in different ways on various graph classes, which leads to class-dependent approximation ratios. We also provide an FPTAS for complete graphs and tree graphs, as well as greedy algorithms and approximation ratios when the district cardinality is bounded, the graph has bounded degree, or the weights are binary. We refer the reader to the full version of the paper for the complete set of results and proofs.
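A minimal sketch of the feasibility side of the star case: check that a candidate district is a star in the graph and that its two weight totals are balanced. Treating "balanced" as "equal up to a tolerance tol" is an assumption for illustration; the paper's exact balance condition may differ.

    # Illustrative sketch (Python): graph as an adjacency dict mapping a vertex to a set of neighbors.
    def is_balanced_star_district(graph, center, leaves, weight_a, weight_b, tol=0.0):
        """True if {center} plus `leaves` forms a star in `graph` and the two
        vertex-weight totals over the district are balanced within `tol`."""
        if not all(leaf in graph[center] for leaf in leaves):
            return False
        district = [center, *leaves]
        total_a = sum(weight_a[v] for v in district)
        total_b = sum(weight_b[v] for v in district)
        return abs(total_a - total_b) <= tol * max(total_a, total_b)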
-
The notion of comparison between system runs is fundamental in formal verification. This concept is implicitly present in the verification of qualitative systems, and is more pronounced in the verification of quantitative systems. In this work, we identify a novel mode of comparison in quantitative systems: the online comparison of the aggregate values of two sequences of quantitative weights. This notion is embodied by comparator automata (comparators, in short), a new class of automata that read two infinite sequences of weights synchronously and relate their aggregate values. We show that aggregate functions that can be represented by a Büchi automaton result in comparators that are finite-state and accept by the Büchi condition as well. Such ω-regular comparators further lead to generic algorithms for a number of well-studied problems, including quantitative inclusion and winning strategies in quantitative graph games with incomplete information, as well as related non-decision problems, such as obtaining a finite representation of all counterexamples in the quantitative inclusion problem. We study comparators for two aggregate functions: discounted-sum and limit-average. We prove that the discounted-sum comparator is ω-regular iff the discount factor is an integer. Not every aggregate function, however, has an ω-regular comparator. Specifically, we show that the language of sequence-pairs for which limit-average aggregates exist is neither ω-regular nor ω-context-free. Given this result, we introduce the notion of prefix-average as a relaxation of limit-average aggregation, and show that it admits ω-context-free comparators, i.e., comparator automata expressed by Büchi pushdown automata.
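For reference, the two aggregate functions have their standard definitions over an infinite weight sequence A = A[0] A[1] A[2] ... (d is the discount factor):

    DS(A, d) = \sum_{i \ge 0} \frac{A[i]}{d^{\,i}}
    \qquad\qquad
    LA(A) = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} A[i]

The limit defining LA need not exist for every sequence, which is exactly why the language of sequence-pairs on which it does is studied above and relaxed to prefix-average.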
-
We describe a paradigm for designing parallel algorithms via approximation, and illustrate it on the b-edgecover problem. A b-edgecover of minimum weight in a graph is a subset C of its edges such that at least a specified number b(v) of edges in C is incident on each vertex v, and the sum of the edge weights in C is minimum. The Greedy algorithm and a variant, the LSE algorithm, provide 3/2-approximation guarantees in the worst case for this problem, but these algorithms have limited parallelism. Hence we design two new 2-approximation algorithms with greater concurrency. The MCE algorithm reduces the computation of a b-edgecover to that of finding a b'-matching, by exploiting the relationship between these subgraphs in an approximation context. The LSE-NW algorithm is derived from the LSE algorithm by using static edge weights rather than dynamically computing effective edge weights. This relaxation gives LSE-NW a worse approximation guarantee but makes it more amenable to parallelization. We prove that both the MCE and LSE-NW algorithms compute the same b-edgecover, whose weight is at most twice that of a minimum-weight b-edgecover. In practice, the 2-approximation and 3/2-approximation algorithms compute edge covers of weight within 10% of the optimal. We implement three of the approximation algorithms, MCE, LSE, and LSE-NW, on shared-memory multi-core machines, including an Intel Xeon and an IBM Power8 machine with 8 TB of memory. The MCE algorithm is the fastest of these by an order of magnitude or more. It computes an edge cover in a graph with billions of edges in 20 seconds using two hundred threads on the IBM Power8. We also show that the parallel depth and work can be bounded for the Suitor and b-Suitor algorithms when edge weights are random.
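A small sketch of what the output must satisfy: a set of edges C is a b-edgecover if every vertex v has at least b(v) incident edges in C, and the algorithms above bound the total weight of C against the minimum-weight cover (the data representation below is our choice for illustration).

    # Illustrative sketch (Python): verify the b-edgecover condition and compute its weight.
    def is_b_edge_cover(cover, b):
        """`cover` is an iterable of (u, v) edges; `b` maps each vertex to its requirement b(v)."""
        incident = {}
        for u, v in cover:
            incident[u] = incident.get(u, 0) + 1
            incident[v] = incident.get(v, 0) + 1
        return all(incident.get(v, 0) >= b[v] for v in b)

    def cover_weight(cover, weight):
        """Total weight of the cover, the quantity the approximation guarantees bound
        (within a factor 3/2 or 2 of the optimum, as described above)."""
        return sum(weight[e] for e in cover)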