We investigate the problem of unconstrained combinatorial multi-armed bandits with full-bandit feedback and stochastic rewards for submodular maximization. Previous works address this problem assuming a monotone submodular reward function. In this work, we study the more general setting in which the reward function is not necessarily monotone and is submodular only in expectation. We propose the Randomized Greedy Learning (RGL) algorithm and theoretically prove that it achieves a $$\frac{1}{2}$$-regret upper bound of $$\tilde{\mathcal{O}}(n T^{\frac{2}{3}})$$ for horizon $$T$$ and number of arms $$n$$. We also show experimentally that RGL outperforms other full-bandit variants in both submodular and non-submodular settings.
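The randomized-greedy template that RGL builds on (the double greedy of Buchbinder et al. for unconstrained submodular maximization) can be sketched as follows, with repeated noisy plays standing in for exact set values. This is a minimal illustration, not the paper's algorithm: `noisy_value` and the fixed per-set sample count `samples` are assumptions in place of RGL's actual exploration schedule.

```python
import random

def randomized_greedy(elements, noisy_value, samples=50):
    """Randomized double greedy for unconstrained submodular maximization,
    estimating marginal gains from repeated noisy evaluations (as a
    full-bandit algorithm like RGL must)."""
    def estimate(S):
        # Average repeated plays to estimate the expected reward of set S.
        return sum(noisy_value(S) for _ in range(samples)) / samples

    X, Y = set(), set(elements)
    for e in elements:
        a = estimate(X | {e}) - estimate(X)   # gain of adding e to X
        b = estimate(Y - {e}) - estimate(Y)   # gain of removing e from Y
        a, b = max(a, 0.0), max(b, 0.0)
        # Randomize between the two greedy moves, biased by their gains.
        p = a / (a + b) if a + b > 0 else 1.0
        if random.random() < p:
            X.add(e)
        else:
            Y.discard(e)
    return X  # X == Y after processing every element
```

With a noiseless modular reward, the sketch keeps exactly the positive-weight elements, matching the intuition that randomization only matters when adding and removing both look attractive.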
An explore-then-commit algorithm for submodular maximization under full-bandit feedback
We investigate the problem of combinatorial multi-armed bandits with stochastic, submodular (in expectation) rewards and full-bandit feedback, where no information other than the reward of the selected action at each time step $$t$$ is observed. We propose a simple algorithm, Explore-Then-Commit Greedy (ETCG), and prove that it achieves a $$(1-1/e)$$-regret upper bound of $$\mathcal{O}(n^\frac{1}{3}k^\frac{4}{3}T^\frac{2}{3}\log(T)^\frac{1}{2})$$ for a horizon $$T$$, number of base elements $$n$$, and cardinality constraint $$k$$. We also show in experiments with synthetic and real-world data that ETCG empirically outperforms other full-bandit methods.
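The explore-then-commit greedy structure described above can be sketched as follows. This is a hedged illustration under a generic reward oracle: the names `play`, `n`, `k`, and the uniform per-candidate sample count `m` are assumptions, not the paper's tuned phase lengths.

```python
def etcg(n, k, play, m):
    """Explore-Then-Commit Greedy sketch: build a size-k set one element
    at a time; in each greedy phase, estimate each candidate augmentation
    by playing it m times (exploration), then commit to the best."""
    S = set()
    for _ in range(k):
        best, best_mean = None, float("-inf")
        for e in range(n):
            if e in S:
                continue
            # Full-bandit feedback: only the joint reward of S ∪ {e} is seen.
            mean = sum(play(S | {e}) for _ in range(m)) / m
            if mean > best_mean:
                best, best_mean = e, mean
        S.add(best)
    return S
```

Each phase scans every unselected base element, which is exactly the cost that the stochastic-greedy variant further below reduces by subsampling.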
- Award ID(s): 2149617
- PAR ID: 10397867
- Editor(s): Cussens, James; Zhang, Kun
- Date Published:
- Journal Name: Uncertainty in Artificial Intelligence
- Volume: 180
- ISSN: 1525-3384
- Page Range / eLocation ID: 1541-1551
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Feldman, Vitaly; Ligett, Katrina; Sabato, Sivan (Ed.) Many real-world problems, such as Social Influence Maximization, face the dilemma of choosing the best $$K$$ out of $$N$$ options at a given time instant. This setup can be modeled as a combinatorial bandit that chooses $$K$$ out of $$N$$ arms at each time, with the aim of achieving an efficient trade-off between exploration and exploitation. This is the first work for combinatorial bandits where the feedback received can be a non-linear function of the chosen $$K$$ arms. Directly applying a multi-armed bandit would require choosing among $$N$$-choose-$$K$$ options, making the state space large. In this paper, we present a novel algorithm that is computationally efficient and whose storage is linear in $$N$$. The proposed algorithm is a divide-and-conquer based strategy that we call CMAB-SM. Further, the proposed algorithm achieves a \textit{regret bound} of $$\tilde{\mathcal{O}}(K^{\frac{1}{2}}N^{\frac{1}{3}}T^{\frac{2}{3}})$$ for a time horizon $$T$$, which is \textit{sub-linear} in all parameters $$T$$, $$N$$, and $$K$$.
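The divide-and-conquer idea behind CMAB-SM can be illustrated with a recursive top-$$K$$ shortlist, shown below. This is not the paper's exact procedure: for simplicity the sketch assumes a noisy per-arm `score` oracle, whereas CMAB-SM works with a joint, possibly non-linear reward over the chosen $$K$$ arms.

```python
def top_k(arms, k, score, m=10):
    """Illustrative divide-and-conquer top-k selection: split the arms,
    recursively shortlist k candidates from each half, then keep the
    best k finalists by averaged noisy scores. Storage per call is
    linear in the number of arms handled."""
    if len(arms) <= k:
        return list(arms)
    mid = len(arms) // 2
    # Divide: the global top-k must lie in the union of each half's top-k.
    finalists = top_k(arms[:mid], k, score, m) + top_k(arms[mid:], k, score, m)
    # Conquer: re-estimate the 2k finalists and keep the best k.
    est = {a: sum(score(a) for _ in range(m)) / m for a in finalists}
    return sorted(finalists, key=est.get, reverse=True)[:k]
```

The recursion is valid because any arm in the global top-$$k$$ is also in the top-$$k$$ of whichever half contains it, so nothing is lost in the divide step.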
We propose a novel combinatorial stochastic-greedy bandit (SGB) algorithm for combinatorial multi-armed bandit problems in which no information other than the joint reward of the selected set of $$n$$ arms at each time step $$t \in [T]$$ is observed. SGB adopts an optimized stochastic explore-then-commit approach and is specifically designed for scenarios with a large set of base arms. Unlike existing methods that explore the entire set of unselected base arms during each selection step, our SGB algorithm samples only an optimized proportion of the unselected arms and selects actions from this subset. We prove that our algorithm achieves a $$(1-1/e)$$-regret bound of $$\mathcal{O}(n^{\frac{1}{3}} k^{\frac{2}{3}} T^{\frac{2}{3}} \log(T)^{\frac{2}{3}})$$ for monotone stochastic submodular rewards, which outperforms the state of the art in terms of the cardinality constraint $$k$$. Furthermore, we empirically evaluate the performance of our algorithm in the context of online constrained social influence maximization. Our results demonstrate that the proposed approach consistently outperforms the other algorithms, with the performance gap increasing as $$k$$ grows.
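The subsampling step that distinguishes this approach from a full greedy scan can be sketched as follows. The subsample size $$\lceil (n/k)\log(1/\varepsilon) \rceil$$ follows the classical stochastic-greedy ("lazier than lazy greedy") rule; that choice, along with `play`, `m`, and `eps`, is an illustrative assumption rather than SGB's exact optimized proportion.

```python
import math
import random

def sgb(n, k, play, m, eps=0.1):
    """Stochastic-greedy bandit sketch: like explore-then-commit greedy,
    but each phase explores only a random subsample of the unselected
    arms instead of all of them."""
    S = set()
    # Stochastic-greedy subsample size: ~ (n/k) * log(1/eps) arms per phase.
    sample_size = min(n, max(1, math.ceil((n / k) * math.log(1.0 / eps))))
    for _ in range(k):
        pool = [e for e in range(n) if e not in S]
        candidates = random.sample(pool, min(sample_size, len(pool)))
        # Estimate each sampled augmentation by m full-bandit plays.
        best = max(candidates,
                   key=lambda e: sum(play(S | {e}) for _ in range(m)) / m)
        S.add(best)
    return S
```

When $$n$$ is large relative to $$k$$, each phase touches only a logarithmic-in-$$1/\varepsilon$$ fraction of the arms, which is where the improved dependence on $$k$$ comes from.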
Let $$H$$ and $$F$$ be hypergraphs. We say $$H$$ {\em contains $$F$$ as a trace} if there exists some set $$S \subseteq V(H)$$ such that $$H|_S:=\{E\cap S: E \in E(H)\}$$ contains a subhypergraph isomorphic to $$F$$. In this paper we give an upper bound on the number of edges in a $$3$$-uniform hypergraph that does not contain $$K_{2,t}$$ as a trace when $$t$$ is large. In particular, we show that $$\lim_{t\to \infty}\lim_{n\to \infty} \frac{\mathrm{ex}(n, \mathrm{Tr}_3(K_{2,t}))}{t^{3/2}n^{3/2}} = \frac{1}{6}.$$ Moreover, we show $$\frac{1}{2} n^{3/2} + o(n^{3/2}) \leqslant \mathrm{ex}(n, \mathrm{Tr}_3(C_4)) \leqslant \frac{5}{6} n^{3/2} + o(n^{3/2})$$.
In this paper, we consider the problem of online monotone DR-submodular maximization subject to long-term stochastic constraints. Specifically, at each round $$t\in [T]$$, after committing an action $$\mathbf{x}_t$$, a random reward $$f_t(\mathbf{x}_t)$$ and an unbiased gradient estimate $$\widetilde{\nabla}f_t(\mathbf{x}_t)$$ at that point (semi-bandit feedback) are revealed. Meanwhile, a budget of $$g_t(\mathbf{x}_t)$$, which is linear and stochastic, is consumed from the total allotted budget $$B_T$$. We propose a gradient-ascent-based algorithm that achieves a $$\frac{1}{2}$$-regret of $$\mathcal{O}(\sqrt{T})$$ with $$\mathcal{O}(T^{3/4})$$ constraint violation with high probability. Moreover, when first-order full-information feedback is available, we propose an algorithm that achieves $$(1-1/e)$$-regret of $$\mathcal{O}(\sqrt{T})$$ with $$\mathcal{O}(T^{3/4})$$ constraint violation. These algorithms significantly improve over the state of the art in terms of query complexity.
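A generic primal-dual gradient-ascent loop for long-term budget constraints, in the spirit of the setup above, is sketched below. This is a hypothetical illustration, not the paper's algorithm: the linear-consumption model `grad_g(x) @ x`, the box domain $$[0,1]^n$$, and the step sizes `eta`, `mu` are all assumptions.

```python
import numpy as np

def primal_dual_ascent(grad_f, grad_g, budget_rate, x0, T, eta, mu):
    """Primal-dual sketch for online maximization under a long-term
    budget: ascend on the reward gradient, descend on the dual-weighted
    constraint gradient, and raise the dual variable whenever the
    per-round budget rate is exceeded."""
    x, lam = np.array(x0, dtype=float), 0.0
    for _ in range(T):
        # Primal step: trade off reward gradient against constraint pressure.
        x = x + eta * (grad_f(x) - lam * grad_g(x))
        x = np.clip(x, 0.0, 1.0)  # project back onto the box domain
        # Dual step: increase lam if consumption exceeds the budget rate.
        consumption = float(grad_g(x) @ x)  # linear budget: g(x) = c . x
        lam = max(0.0, lam + mu * (consumption - budget_rate))
    return x
```

With a slack budget the dual variable stays at zero and the loop reduces to plain projected gradient ascent; the $$\mathcal{O}(T^{3/4})$$ violation guarantees in the abstract govern how tightly the dual updates must track the true, stochastic consumption.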