Search for: All records

Award ID contains: 2114269

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. In this paper, we generalize the recently studied stochastic matching problem to more accurately model a significant medical process, kidney exchange, along with several other applications. The stochastic matching problem studied to date is as follows: given a graph G = (V, E), each edge is included in the realized subgraph of G independently with probability p_e, and the goal is to find a degree-bounded subgraph Q of G whose expected maximum matching approximates the expected maximum matching of G. This model does not account for vertex dropouts, which arise in several applications, e.g., in kidney exchange when donors or patients opt out of the exchange process, and in online freelancing and online dating when profiles turn out to be fake. We therefore study a more general model of stochastic matching in which vertices and edges are both realized independently, with probabilities p_v and p_e respectively; this model fits important applications more accurately than the previously studied one. We present the first algorithms and analysis for this generalization of the stochastic matching model and prove that they achieve good approximation ratios. In particular, we show that the approximation factor of a natural algorithm for this problem is at least 0.6568 in unweighted graphs and 1/2 + ε in weighted graphs, for some constant ε > 0. We further improve our result for unweighted graphs to 2/3 using edge degree constrained subgraphs (EDCS).
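    As a concrete illustration of the model (a minimal sketch, not the paper's algorithm), the following Python snippet estimates the expected maximum matching under independent vertex and edge realizations by Monte Carlo sampling. The graphs, probabilities, and function names are illustrative assumptions, and it assumes the networkx library.

    # Monte Carlo estimate of the expected maximum matching size under the
    # generalized stochastic matching model: each vertex survives with
    # probability p_v and each edge with probability p_e, independently.
    import random
    import networkx as nx

    def expected_matching_size(G, p_v, p_e, trials=1000, seed=0):
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            # Realize vertices first, then edges whose endpoints both survived.
            alive = {v for v in G.nodes if rng.random() < p_v}
            H = nx.Graph()
            H.add_nodes_from(alive)
            H.add_edges_from((u, v) for u, v in G.edges
                             if u in alive and v in alive and rng.random() < p_e)
            # Maximum cardinality matching on the realized subgraph.
            total += len(nx.max_weight_matching(H, maxcardinality=True))
        return total / trials

    # Example: compare the full graph against a degree-bounded candidate Q.
    G = nx.complete_graph(10)
    Q = nx.random_regular_graph(3, 10, seed=1)
    print(expected_matching_size(G, p_v=0.8, p_e=0.5))
    print(expected_matching_size(Q, p_v=0.8, p_e=0.5))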
  2. In the Colonel Blotto game, introduced by Borel in 1921, two colonels simultaneously distribute their troops across different battlefields. The winner of each battlefield is determined independently by a winner-takes-all rule, and each colonel's payoff is the number of battlefields won. The Colonel Blotto game is commonly used to analyze a wide range of applications, from the U.S. presidential election to innovative technology competitions to advertising, sports, and politics. Despite persistent efforts to find optimal strategies for the Colonel Blotto game, the first polynomial-time algorithm was provided only recently by Ahmadinejad, Dehghani, Hajiaghayi, Lucier, Mahini, and Seddighin. Their algorithm relies on an exponential-size linear program (LP), which they solve using the ellipsoid method. Despite its significant theoretical importance, this algorithm is highly impractical because of its use of the ellipsoid method: in practice, even the simplex method (despite its worst-case exponential running time) outperforms the ellipsoid method. In this paper, we provide the first polynomial-size LP formulation of the optimal strategies for the Colonel Blotto game, using linear extension techniques. Roughly speaking, we take the natural representation of the strategy space polytope and transform it into a higher-dimensional strategy space which, interestingly, has exponentially fewer facets. In other words, we add a few variables to the LP so that, surprisingly, the number of constraints drops to a polynomial. We use this polynomial-size LP to provide a simpler and significantly faster algorithm for finding optimal strategies of the Colonel Blotto game. We further show that this representation is asymptotically tight: no other linear representation of the strategy space has fewer constraints. We also extend our approach to multidimensional Colonel Blotto games, in which players may have several kinds of budgets, such as money, time, and human resources. Implementing this algorithm lets us run tests that were previously impossible to solve in a reasonable time, and we observe some interesting properties of Colonel Blotto; for example, the behavior of players in the discrete model is very similar to that in the continuous model solved by Roberson.
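    For intuition, a tiny discrete Blotto instance can be solved with the standard zero-sum-game LP over explicitly enumerated strategies, as sketched below. This brute-force enumeration is exactly the exponential blow-up the paper's linear extension avoids; instance sizes and names are illustrative, and numpy and scipy are assumed.

    # Solve a small discrete Colonel Blotto game as a zero-sum matrix game.
    # Strategies are enumerated explicitly, so this only scales to tiny games.
    from itertools import product
    import numpy as np
    from scipy.optimize import linprog

    def blotto_strategies(troops, battlefields):
        # All ways to split `troops` across `battlefields`.
        return [s for s in product(range(troops + 1), repeat=battlefields)
                if sum(s) == troops]

    def payoff(a, b):
        # Winner-takes-all per battlefield; ties give half a point to each.
        return sum(1.0 if x > y else 0.0 if x < y else 0.5 for x, y in zip(a, b))

    A_strats = blotto_strategies(5, 3)
    B_strats = blotto_strategies(5, 3)
    M = np.array([[payoff(a, b) for b in B_strats] for a in A_strats])

    # Maximize the game value v subject to x^T M >= v columnwise, sum(x) = 1.
    m, n = M.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # minimize -v
    A_ub = np.hstack([-M.T, np.ones((n, 1))])      # v - sum_i M[i][j] x_i <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print("game value:", res.x[-1])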
  3. We study the problem of fair allocation of indivisible goods. We use the maximin share paradigm introduced by Budish [Budish E (2011) The combinatorial assignment problem: Approximate competitive equilibrium from equal incomes. J. Political Econom. 119(6):1061–1103] as a measure of fairness. Kurokawa et al. [Kurokawa D, Procaccia AD, Wang J (2018) Fair enough: Guaranteeing approximate maximin shares. J. ACM 65(2):8] were the first to investigate this fundamental problem in the additive setting. They showed that in delicately constructed examples, not every agent can obtain a utility of at least her maximin value. They mitigated this impossibility result with a beautiful observation: no matter what the utility functions are, the items can always be allocated so that each agent's utility is at least 2/3 of her maximin value. They left open whether this bound can be improved. Our main contribution answers this question in the affirmative: we improve their approximation guarantee to a factor of 3/4.
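    To make the maximin share concrete, the brute-force sketch below computes one agent's maximin value for a small instance by enumerating all partitions of the items into n bundles. It illustrates the definition only, not the paper's 3/4-approximation algorithm, and the utilities shown are made up.

    # Brute-force maximin share (MMS): the best utility an agent can guarantee
    # herself if she partitions the items into n bundles and receives the worst.
    from itertools import product

    def maximin_share(utilities, n_agents):
        # utilities: additive values the agent assigns to each item.
        m = len(utilities)
        best = 0
        # Assign each item a bundle label in {0, ..., n_agents - 1}.
        for labels in product(range(n_agents), repeat=m):
            bundle_values = [0] * n_agents
            for item, bundle in enumerate(labels):
                bundle_values[bundle] += utilities[item]
            best = max(best, min(bundle_values))
        return best

    # Example: one agent's additive utilities over 6 items, 3 agents total.
    u = [7, 5, 4, 3, 2, 1]
    print(maximin_share(u, 3))   # the agent's maximin value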
  4. The edit distance between two strings is defined as the smallest number of insertions, deletions, and substitutions needed to transform one string into the other. Approximating edit distance in subquadratic time is "one of the biggest unsolved problems in the field of combinatorial pattern matching" [37]. Our main result is a quantum constant-approximation algorithm for computing the edit distance in truly subquadratic time. More precisely, we give a quantum algorithm that approximates the edit distance within a factor of 3, and we further extend this result to a faster quantum algorithm that approximates the edit distance within a larger constant factor. Our solutions are based on a framework for approximating edit distance in parallel settings. This framework requires, as a black box, an algorithm that computes the distances of several smaller strings all at once. For a quantum algorithm, we reduce the black box to metric estimation and provide efficient algorithms for approximating it. We further show that this framework enables us to approximate edit distance in distributed settings. To this end, we provide a MapReduce algorithm that approximates edit distance within a constant factor, using sublinearly many machines and sublinear memory, and running in a logarithmic number of rounds.
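    For reference, the exact quadratic baseline that these subquadratic algorithms approximate is the classical dynamic program below; the function name is illustrative.

    # Classical O(|s| * |t|) dynamic program for exact edit distance:
    # minimum number of insertions, deletions, and substitutions turning s into t.
    def edit_distance(s: str, t: str) -> int:
        n, m = len(s), len(t)
        prev = list(range(m + 1))        # distances from s[:0] to each prefix of t
        for i in range(1, n + 1):
            curr = [i] + [0] * m
            for j in range(1, m + 1):
                cost = 0 if s[i - 1] == t[j - 1] else 1
                curr[j] = min(prev[j] + 1,         # delete s[i-1]
                              curr[j - 1] + 1,     # insert t[j-1]
                              prev[j - 1] + cost)  # substitute (or match)
            prev = curr
        return prev[m]

    print(edit_distance("kitten", "sitting"))  # 3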