We consider the problem of maximizing the multilinear extension of a submodular function subject to a single matroid constraint or multiple packing constraints with a small number of adaptive rounds of evaluation queries. We obtain the first algorithms with low adaptivity for submodular maximization with a matroid constraint. Our algorithms achieve a $$1-1/e-\epsilon$$ approximation for monotone functions and a $$1/e-\epsilon$$ approximation for non-monotone functions, which nearly matches the best guarantees known in the fully adaptive setting. The number of rounds of adaptivity is $$O(\log^2{n}/\epsilon^3)$$, which is an exponential speedup over the existing algorithms. We obtain the first parallel algorithm for non-monotone submodular maximization subject to packing constraints. Our algorithm achieves a $$1/e-\epsilon$$ approximation using $$O(\log(n/\epsilon) \log(1/\epsilon) \log(n+m)/ \epsilon^2)$$ parallel rounds, which is again an exponential speedup in parallel time over the existing algorithms. For monotone functions, we obtain a $$1-1/e-\epsilon$$ approximation in $$O(\log(n/\epsilon)\log(m)/\epsilon^2)$$ parallel rounds. The number of parallel rounds of our algorithm matches that of the state-of-the-art algorithm for solving packing LPs with a linear objective (Mahoney et al., 2016). Our results apply more generally to the problem of maximizing a diminishing returns submodular (DR-submodular) function.
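The multilinear extension referenced above is the standard one, $$F(x) = \mathbb{E}[f(R(x))]$$, where the random set $$R(x)$$ contains each element $$i$$ independently with probability $$x_i$$; in practice it is evaluated approximately by sampling. Below is a minimal Monte Carlo sketch of such an evaluation query; the function and parameter names are ours and this is not the estimator used in the paper.

```python
import random

def multilinear_extension_estimate(f, x, num_samples=1000, seed=0):
    """Monte Carlo estimate of F(x) = E[f(R(x))], where the random set R(x)
    contains element i independently with probability x[i]; `f` maps a set of
    indices in {0, ..., len(x)-1} to a real value."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        sample = {i for i, xi in enumerate(x) if rng.random() < xi}
        total += f(sample)
    return total / num_samples

# Toy usage: a coverage function (submodular) evaluated at a fractional point.
ground_sets = [{1, 2}, {2, 3}, {3, 4}]
cover = lambda S: len(set().union(*(ground_sets[i] for i in S))) if S else 0
print(multilinear_extension_estimate(cover, [0.5, 0.5, 0.5]))
```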
The Power of Adaptivity for Stochastic Submodular Cover
Adaptivity in Stochastic Submodular Cover
Solutions to stochastic optimization problems are typically sequential decision processes that make decisions one by one, waiting for (and using) the feedback from each decision. While such “adaptive” solutions achieve the best objective, they can be very time-consuming because of the need to wait for feedback after each decision. A natural question is: are there solutions that adapt (i.e., wait for feedback) only a few times while still being competitive with the fully adaptive optimal solution? In “The Power of Adaptivity for Stochastic Submodular Cover,” Ghuge, Gupta, and Nagarajan resolve this question in the context of stochastic submodular cover, a fundamental stochastic covering problem. They provide algorithms that achieve a smooth trade-off between the number of adaptive “rounds” and the solution quality. The authors also demonstrate, via experiments on real-world and synthetic data sets, that even for problems with more than 1,000 decisions, about six rounds of adaptivity suffice to obtain solutions nearly as good as fully adaptive ones.
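To make the notion of a “round” of adaptivity concrete, here is a toy sketch of an r-round policy for a stochastic set-cover instance (a special case of stochastic submodular cover): within a round, items are chosen using only nominal (expected) information, and the realized feedback for the whole batch is observed only between rounds. This is an illustrative schematic under our own toy model, not the algorithm from the paper.

```python
import random

def batched_adaptive_cover(universe, nominal, realize, num_rounds, batch_size, rng):
    """Toy r-round policy for stochastic set cover: each round picks a batch of
    items greedily using only their *nominal* (expected) coverage, then observes
    the realized coverage of the whole batch at once, so feedback is used only
    between rounds, never inside a round."""
    covered, available = set(), set(nominal)
    for _ in range(num_rounds):
        if universe <= covered or not available:
            break
        planned, batch = set(covered), []
        for _ in range(min(batch_size, len(available))):
            # Rank by planned (not realized) marginal coverage: no mid-round feedback.
            best = max(available, key=lambda it: len(nominal[it] - planned))
            batch.append(best)
            available.remove(best)
            planned |= nominal[best]
        for item in batch:
            covered |= realize(item, rng)   # one joint observation closes the round
    return covered

# Toy usage: each item covers each element of its nominal set with probability 0.7.
rng = random.Random(0)
universe = set(range(10))
nominal = {i: {i, (i + 1) % 10, (i + 2) % 10} for i in range(10)}
realize = lambda item, r: {e for e in nominal[item] if r.random() < 0.7}
print(batched_adaptive_cover(universe, nominal, realize, num_rounds=3, batch_size=4, rng=rng))
```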
- PAR ID: 10521347
- Publisher / Repository: INFORMS
- Date Published:
- Journal Name: Operations Research
- Volume: 72
- Issue: 3
- ISSN: 0030-364X
- Page Range / eLocation ID: 1156 to 1176
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Adjustable robust optimization (ARO) involves recourse decisions (i.e., reactive ‘wait-and-see’ actions taken after the uncertainty is realized) as functions of the uncertainty, typically posed in a two-stage stochastic setting. Solving general ARO problems is challenging, so ways to reduce the computational effort have been proposed, the most popular being affine decision rules, where ‘wait-and-see’ decisions are approximated as affine adjustments of the uncertainty. In this work we propose a novel method for deriving generalized affine decision rules for linear mixed-integer ARO problems through multi-parametric programming, leading to the exact and global solution of the ARO problem. The problem is treated as a multi-level programming problem and is then solved using a novel algorithm for the exact and global solution of multi-level mixed-integer linear programming problems. The main idea behind the proposed approach is to solve the lower optimization level of the ARO problem parametrically, treating the ‘here-and-now’ variables and the uncertainties as parameters. This results in a set of affine decision rules for the ‘wait-and-see’ variables as functions of the ‘here-and-now’ variables and the uncertainties over their entire feasible space. A set of illustrative numerical examples is provided to demonstrate the potential of the proposed approach.
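As a rough illustration of the affine-decision-rule idea (the notation below is ours, not taken from the paper), a two-stage linear ARO problem and the restricted form of its ‘wait-and-see’ decision can be written as

$$\min_{x}\; \max_{\theta \in \Theta}\; \min_{y(\cdot)}\; c^{\top}x + d^{\top}y(x,\theta) \quad \text{s.t.} \quad A x + B y(x,\theta) \le b + E\theta \;\; \forall\, \theta \in \Theta,$$

$$y(x,\theta) = K_i\, x + L_i\, \theta + k_i \quad \text{for } (x,\theta) \in CR_i,$$

where the piecewise-affine rules and the critical regions $$CR_i$$ are what a multi-parametric solution of the lower-level problem produces over the feasible $$(x,\theta)$$ space.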
- Learning problems commonly exhibit an interesting feedback mechanism wherein the population data reacts to competing decision makers’ actions. This paper formulates a new game-theoretic framework for this phenomenon, called multi-player performative prediction. We focus on two distinct solution concepts, namely (i) performatively stable equilibria and (ii) Nash equilibria of the game. The latter are arguably more informative, but are generally computationally difficult to find since they are solutions of non-monotone games. We show that under mild assumptions, the performatively stable equilibria can be found efficiently by a variety of algorithms, including repeated retraining and the repeated (stochastic) gradient method. We then establish transparent sufficient conditions for strong monotonicity of the game and use them to develop algorithms for finding Nash equilibria. We investigate derivative-free methods and adaptive gradient algorithms wherein each player alternates between learning a parametric description of their distribution and taking gradient steps on the empirical risk. Synthetic and semi-synthetic numerical experiments illustrate the results.
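As an illustration of the repeated (stochastic) gradient method mentioned above, the sketch below runs it on a toy two-player quadratic game with a location-shift performative effect. The model and all names are ours, chosen only so the dynamics contract to a performatively stable point; this is not the paper's experimental setup.

```python
import numpy as np

def repeated_stochastic_gradient(a, eps, steps=2000, lr=0.05, batch=32, seed=0):
    """Toy two-player performative prediction game: player i's data is drawn from
    N(a[i] + eps[i] @ theta, 1), so the population reacts to *both* players'
    decisions. Each iteration, every player takes a stochastic gradient step on
    the risk 0.5*(theta_i - z)^2 while treating the data distribution as fixed."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)
    for _ in range(steps):
        grads = np.zeros(2)
        for i in range(2):
            z = rng.normal(a[i] + eps[i] @ theta, 1.0, size=batch)  # decision-dependent data
            grads[i] = np.mean(theta[i] - z)                        # gradient with D(theta) frozen
        theta -= lr * grads
    return theta  # approaches theta* solving theta* = a + eps @ theta* when eps is small

# With small sensitivities eps, the iterates contract to the performatively stable point.
print(repeated_stochastic_gradient(a=np.array([1.0, -1.0]),
                                   eps=np.array([[0.2, 0.1], [0.1, 0.2]])))
```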
- In this paper, we study the tradeoff between the approximation guarantee and adaptivity for the problem of maximizing a monotone submodular function subject to a cardinality constraint. The adaptivity of an algorithm is the number of sequential rounds of queries it makes to the evaluation oracle of the function, where in every round the algorithm is allowed to make polynomially-many parallel queries. Adaptivity is an important consideration in settings where the objective function is estimated using samples and in applications where adaptivity is the main running time bottleneck. Previous algorithms achieving a nearly-optimal $$1 - 1/e - \epsilon$$ approximation require $$\Omega(n)$$ rounds of adaptivity. In this work, we give the first algorithm that achieves a $$1 - 1/e - \epsilon$$ approximation using $$O(\ln{n} / \epsilon^2)$$ rounds of adaptivity. The number of function evaluations and additional running time of the algorithm are $$O(n \; \mathrm{poly}(\log{n}, 1/\epsilon))$$.
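The round structure can be illustrated with a simplified threshold-based batch rule: in each round, all marginal-gain queries are made against the same current solution (so they can be issued in parallel), and every element clearing the current threshold is added at once. This sketch is only meant to show what a "round" is; it ignores the interactions among elements added in the same round, which the actual low-adaptivity algorithms must control to obtain the $$1 - 1/e - \epsilon$$ guarantee.

```python
def threshold_batch_greedy(f, ground_set, k, eps=0.2):
    """Simplified batch-threshold rule showing the round structure: each round,
    all marginal gains are computed against the same current set S (hence the
    queries can run in parallel), and every element clearing the threshold tau
    is committed at once; tau then drops by a (1 - eps) factor."""
    S, remaining = set(), set(ground_set)
    tau = max(f({e}) for e in ground_set)              # first parallel round: singleton values
    tau_min = eps * tau / max(1, len(ground_set))
    while len(S) < k and tau > tau_min and remaining:
        base = f(S)
        gains = {e: f(S | {e}) - base for e in remaining}   # one round of parallel queries
        for e in sorted(remaining, key=lambda e: -gains[e]):
            if len(S) >= k:
                break
            if gains[e] >= tau:
                S.add(e)
                remaining.discard(e)
        tau *= 1 - eps                                  # lower the bar for the next round
    return S

# Toy usage with a (monotone, submodular) coverage objective.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1}, 4: {6, 7}}
cover = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(threshold_batch_greedy(cover, set(sets), k=3))
```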
- From out-competing grandmasters in chess to informing high-stakes healthcare decisions, emerging methods from artificial intelligence are increasingly capable of making complex and strategic decisions in diverse, high-dimensional and uncertain situations. But can these methods help us devise robust strategies for managing environmental systems under great uncertainty? Here we explore how reinforcement learning (RL), a subfield of artificial intelligence, approaches decision problems through a lens similar to adaptive environmental management: learning through experience to gradually improve decisions with updated knowledge. We review where RL holds promise for improving evidence-informed adaptive management decisions even when classical optimization methods are intractable and discuss technical and social issues that arise when applying RL to adaptive management problems in the environmental domain. Our synthesis suggests that environmental management and computer science can learn from one another about the practices, promises and perils of experience-based decision-making. This article is part of the theme issue ‘Detecting and attributing the causes of biodiversity change: needs, gaps and solutions’.