

Title: Multi-Agent Maximization of a Monotone Submodular Function via Maximum Consensus
This paper studies distributed submodular optimization subject to a partition matroid constraint. We work in the value oracle model, where the agents' only access to the utility function is through a black box that returns the function's value on a queried set. The agents communicate over a connected undirected graph, and each agent has access only to its own strategy set. Since submodular maximization subject to matroid constraints is NP-hard, our objective is a polynomial-time distributed algorithm that obtains a suboptimal solution with a guaranteed optimality bound. The proposed algorithm is a distributed stochastic gradient ascent scheme built on the multilinear extension of the submodular set function. While solving for the fractional solution of the multilinear extension model, we use a maximum consensus protocol to minimize the inconsistency of the information shared over the network caused by delays in the flow of information. Furthermore, we propose a distributed framework for recovering a set solution from the fractional solution. We show that, in the worst case, the team objective evaluated at the strategy set returned by our distributed algorithm is within 1 − 1/e − O(1/T) of the optimal value in the value oracle model, where T is the number of communication instances of the agents. An example demonstrates our results.
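To make the continuous part of the scheme concrete, the sketch below illustrates the two ingredients the abstract refers to: a sampled estimate of the gradient of the multilinear extension $F(x) = \mathbb{E}[f(R_x)]$, where $R_x$ contains each element $e$ independently with probability $x_e$, and an elementwise max-consensus update over copies received from neighboring agents. This is a minimal illustration under stated assumptions, not the authors' algorithm; the oracle f, the dictionaries used for x, and the sample count are placeholders.

```python
import random

def sample_set(x):
    """Draw R_x: include element e independently with probability x[e]."""
    return {e for e, p in x.items() if random.random() < p}

def grad_estimate(f, x, num_samples=50):
    """Monte Carlo estimate of the gradient of the multilinear extension
    F(x) = E[f(R_x)]; the partial derivative w.r.t. x_e equals
    E[f(R_x + e) - f(R_x - e)]."""
    grad = {e: 0.0 for e in x}
    for _ in range(num_samples):
        R = sample_set(x)
        for e in x:
            grad[e] += (f(R | {e}) - f(R - {e})) / num_samples
    return grad

def max_consensus_step(own_x, neighbor_xs):
    """Reconcile delayed or partial information by taking an elementwise maximum
    over an agent's own vector and the vectors received from its neighbors."""
    out = dict(own_x)
    for nx in neighbor_xs:
        for e, v in nx.items():
            out[e] = max(out.get(e, 0.0), v)
    return out
```

In a scheme of this flavor, each agent would take ascent steps only on the coordinates of its own strategy set (its block of the partition matroid), exchange its current vector with its graph neighbors, apply the max-consensus step to resolve stale copies, and finally round the fractional point to a feasible set; the specific step sizes, projection, and rounding procedure are part of the paper, not of this sketch.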
Award ID(s): 1724331
NSF-PAR ID: 10328432
Author(s) / Creator(s):
Date Published:
Journal Name: IEEE Conference on Decision and Control
Page Range / eLocation ID: 1238 to 1243
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. We consider the problem of maximizing the multilinear extension of a submodular function subject to a single matroid constraint or multiple packing constraints, using a small number of adaptive rounds of evaluation queries. We obtain the first algorithms with low adaptivity for submodular maximization with a matroid constraint. Our algorithms achieve a $1-1/e-\epsilon$ approximation for monotone functions and a $1/e-\epsilon$ approximation for non-monotone functions, which nearly matches the best guarantees known in the fully adaptive setting. The number of rounds of adaptivity is $O(\log^2{n}/\epsilon^3)$, which is an exponential speedup over the existing algorithms. We obtain the first parallel algorithm for non-monotone submodular maximization subject to packing constraints. Our algorithm achieves a $1/e-\epsilon$ approximation using $O(\log(n/\epsilon) \log(1/\epsilon) \log(n+m)/\epsilon^2)$ parallel rounds, which is again an exponential speedup in parallel time over the existing algorithms. For monotone functions, we obtain a $1-1/e-\epsilon$ approximation in $O(\log(n/\epsilon)\log(m)/\epsilon^2)$ parallel rounds. The number of parallel rounds of our algorithm matches that of the state-of-the-art algorithm for solving packing LPs with a linear objective (Mahoney et al., 2016). Our results apply more generally to the problem of maximizing a diminishing-returns submodular (DR-submodular) function.
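For reference, the multilinear extension relaxed in this and the following abstracts is the standard one: for a set function $f: 2^N \to \mathbb{R}_{\ge 0}$ and $x \in [0,1]^N$,

$$F(x) \;=\; \mathbb{E}\big[f(R_x)\big] \;=\; \sum_{S \subseteq N} f(S) \prod_{i \in S} x_i \prod_{j \notin S} (1 - x_j),$$

where $R_x$ contains each element $i$ independently with probability $x_i$, and $\partial F/\partial x_i = \mathbb{E}\big[f(R_x \cup \{i\}) - f(R_x \setminus \{i\})\big]$. In the value oracle model, both quantities are accessible only through sampled evaluation queries, and the rounds of adaptivity above count how many sequential batches of such queries are needed.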
  2. We consider the problem of maximizing a monotone submodular function subject to a knapsack constraint. Our main contribution is an algorithm that achieves a nearly-optimal, $1 - 1/e - \epsilon$ approximation, using $(1/\epsilon)^{O(1/\epsilon^4)} n \log^2{n}$ function evaluations and arithmetic operations. Our algorithm is impractical but theoretically interesting, since it overcomes a fundamental running time bottleneck of the multilinear extension relaxation framework. This is the main approach for obtaining nearly-optimal approximation guarantees for important classes of constraints but it leads to $\Omega(n^2)$ running times, since evaluating the multilinear extension is expensive. Our algorithm maintains a fractional solution with only a constant number of entries that are strictly fractional, which allows us to overcome this obstacle. 
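To see why a fractional solution with only a constant number of strictly fractional entries removes this bottleneck, here is a minimal sketch (a hypothetical helper, not the paper's algorithm): when all but $k$ coordinates of $x$ are 0 or 1, the multilinear extension can be evaluated exactly with $2^k$ oracle calls, independent of $n$, instead of through expensive sampling.

```python
from itertools import combinations

def exact_F_sparse(f, ones, frac):
    """Exactly evaluate the multilinear extension F(x) when x is integral
    except on a small set of coordinates.

    f    : set-function oracle, f(S) -> float
    ones : set of elements with x_e = 1
    frac : dict {e: x_e} with 0 < x_e < 1 (all other coordinates are 0)

    Cost: 2**len(frac) oracle calls, independent of the ground-set size.
    """
    elems = list(frac)
    total = 0.0
    # Enumerate which fractional elements are "on" and weight f by the
    # probability of that on/off pattern.
    for k in range(len(elems) + 1):
        for on in combinations(elems, k):
            prob = 1.0
            for e in elems:
                prob *= frac[e] if e in on else (1.0 - frac[e])
            total += prob * f(ones | set(on))
    return total
```

For example, exact_F_sparse(f, ones={'a', 'b'}, frac={'c': 0.3, 'd': 0.7}) makes only $2^2 = 4$ oracle calls.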
  3. Consider deploying a team of robots in order to visit sites in a risky environment (i.e., where a robot might be lost during a traversal), subject to team-based operational constraints such as limits on team composition, traffic throughputs, and launch constraints. We formalize this problem using a graph to represent the environment, enforcing probabilistic survival constraints for each robot, and using a matroid (which generalizes linear independence to sets) to capture the team-based operational constraints. The resulting “Matroid Team Surviving Orienteers” (MTSO) problem has broad applications in robotics such as informative path planning, resource delivery, and search and rescue. We demonstrate that the objective for the MTSO problem has submodular structure, which leads us to develop two polynomial-time algorithms that are guaranteed to find a solution with value within a constant factor of the optimum. The second of our algorithms is an extension of the accelerated continuous greedy algorithm, and can be applied to much broader classes of constraints while maintaining bounds on suboptimality. In addition to in-depth analysis, we demonstrate the efficiency of our approaches by applying them to a scenario where a team of robots must gather information while avoiding dangers in the Coral Triangle, and we characterize scaling and parameter selection using a synthetic dataset.
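For orientation, the classical greedy that underlies constant-factor guarantees for monotone submodular maximization under a matroid constraint is sketched below. It is not the MTSO algorithms themselves, which additionally handle survival probabilities and build on an accelerated continuous greedy; the oracles f and is_independent are placeholders.

```python
def greedy_matroid(f, ground_set, is_independent):
    """Classical greedy for maximizing a monotone submodular f subject to a
    matroid constraint given by an independence oracle (1/2-approximation).

    f              : set-function oracle, f(S) -> float
    ground_set     : iterable of elements
    is_independent : oracle, is_independent(S) -> bool
    """
    S = set()
    remaining = set(ground_set)
    while remaining:
        # Among elements that keep S independent, pick the largest marginal gain.
        best, best_gain = None, 0.0
        for e in remaining:
            if is_independent(S | {e}):
                gain = f(S | {e}) - f(S)
                if gain > best_gain:
                    best, best_gain = e, gain
        if best is None:
            break
        S.add(best)
        remaining.discard(best)
    return S
```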

     
  4. Banerjee, Arindam; Fukumizu, Kenji (Eds.)
    Numerous tasks in machine learning and artificial intelligence have been modeled as submodular maximization problems. These problems usually involve sensitive data about individuals, and in addition to maximizing the utility, privacy concerns should be considered. In this paper, we study the general framework of non-negative monotone submodular maximization subject to matroid or knapsack constraints in both offline and online settings. For the offline setting, we propose a differentially private $(1-\frac{\kappa}{e})$-approximation algorithm, where $\kappa\in[0,1]$ is the total curvature of the submodular set function, which improves upon prior works in terms of approximation guarantee and query complexity under the same privacy budget. In the online setting, we propose the first differentially private algorithm, and we specify the conditions under which the regret bound scales as $\tilde{O}(\sqrt{T})$, i.e., privacy can be ensured while maintaining the same regret bound as the optimal regret guarantee in the non-private setting.
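For reference, the total curvature $\kappa$ in the $(1-\frac{\kappa}{e})$ guarantee is the standard quantity

$$\kappa \;=\; 1 \;-\; \min_{e \in N:\, f(\{e\}) > 0} \frac{f(e \mid N \setminus \{e\})}{f(\{e\})} \;\in\; [0,1],$$

where $f(e \mid S) = f(S \cup \{e\}) - f(S)$ is the marginal gain of $e$ given $S$. A modular (additive) function has $\kappa = 0$, in which case the guarantee improves to $1$, while $\kappa = 1$ recovers the familiar $1 - 1/e$ bound.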