Title: An explore-then-commit algorithm for submodular maximization under full-bandit feedback
We investigate the problem of combinatorial multi-armed bandits with stochastic submodular (in expectation) rewards and full-bandit feedback, where no information other than the reward of the selected action is observed at each time step $$t$$. We propose a simple algorithm, Explore-Then-Commit Greedy (ETCG), and prove that it achieves a $$(1-1/e)$$-regret upper bound of $$\mathcal{O}(n^\frac{1}{3}k^\frac{4}{3}T^\frac{2}{3}\log(T)^\frac{1}{2})$$ for a horizon $$T$$, number of base elements $$n$$, and cardinality constraint $$k$$. We also show in experiments with synthetic and real-world data that ETCG empirically outperforms other full-bandit methods.
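To make the explore-then-commit greedy idea concrete, the following is a minimal Python sketch: it builds the action set one element at a time, spending a fixed number of exploration plays on each candidate before committing, and then plays the final set for the remaining horizon. The reward oracle `play`, the per-candidate sample count `m`, and the budget split are illustrative assumptions, not the exact exploration schedule analyzed in the paper.

```python
import math
import random

def etc_greedy(base_elements, k, T, play):
    """Explore-then-commit greedy sketch for stochastic submodular
    maximization under full-bandit feedback.

    play(S) returns one noisy reward for playing the action set S.
    The per-candidate sample count m is a heuristic choice, not the
    exploration schedule analyzed in the paper.
    """
    n = len(base_elements)
    t = 0
    S = []
    # Heuristic budget: roughly T^(2/3) exploration plays in total,
    # split over the k greedy phases and up to n candidates per phase.
    m = max(1, int(T ** (2 / 3) / (n * k)))

    for _ in range(k):                       # build S one element at a time
        best_elem, best_mean = None, -math.inf
        for a in base_elements:
            if a in S:
                continue
            total = 0.0
            for _ in range(m):               # estimate the value of S + {a}
                total += play(S + [a])
                t += 1
            if total / m > best_mean:
                best_elem, best_mean = a, total / m
        S.append(best_elem)                  # commit to the best candidate

    while t < T:                             # exploit the committed set
        play(S)
        t += 1
    return S

# Toy usage: a noisy weighted-coverage reward (hypothetical example).
universe = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}

def play(S):
    covered = set().union(*(universe[a] for a in S)) if S else set()
    return len(covered) + random.gauss(0, 0.1)

print(etc_greedy(list(universe), k=2, T=10_000, play=play))
```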
Award ID(s):
2149617
PAR ID:
10397867
Author(s) / Creator(s):
; ; ; ;
Editor(s):
Cussens, James; Zhang, Kun
Date Published:
Journal Name:
Uncertainty in Artificial Intelligence
Volume:
180
ISSN:
1525-3384
Page Range / eLocation ID:
1541-1551
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We investigate the problem of unconstrained combinatorial multi-armed bandits with full-bandit feedback and stochastic rewards for submodular maximization. Previous works investigate the same problem assuming a submodular and monotone reward function. In this work, we study a more general problem, i.e., when the reward function is not necessarily monotone, and the submodularity is assumed only in expectation. We propose the Randomized Greedy Learning (RGL) algorithm and theoretically prove that it achieves a $$\frac{1}{2}$$-regret upper bound of $$\tilde{\mathcal{O}}(n T^{\frac{2}{3}})$$ for horizon $$T$$ and number of arms $$n$$. We also show in experiments that RGL empirically outperforms other full-bandit variants in submodular and non-submodular settings. A hedged code sketch of this style of randomized greedy under full-bandit feedback appears after this list.
  2. Feldman, Vitaly; Ligett, Katrina; Sabato, Sivan (Ed.)
    Many real-world problems like Social Influence Maximization face the dilemma of choosing the best $$K$$ out of $$N$$ options at a given time instant. This setup can be modeled as a combinatorial bandit which chooses $$K$$ out of $$N$$ arms at each time, with an aim to achieve an efficient trade-off between exploration and exploitation. This is the first work for combinatorial bandits where the feedback received can be a non-linear function of the chosen $$K$$ arms. The direct use of multi-armed bandits requires choosing among $$N$$-choose-$$K$$ options, making the state space large. In this paper, we present a novel algorithm that is computationally efficient and whose storage is linear in $$N$$. The proposed algorithm is a divide-and-conquer based strategy that we call CMAB-SM. Further, the proposed algorithm achieves a \textit{regret bound} of $$\tilde{O}(K^{\frac{1}{2}}N^{\frac{1}{3}}T^{\frac{2}{3}})$$ for a time horizon $$T$$, which is \textit{sub-linear} in all parameters $$T$$, $$N$$, and $$K$$.
  3. Let $$H$$ and $$F$$ be hypergraphs. We say $$H$$ {\em contains $$F$$ as a trace} if there exists some set $$S \subseteq V(H)$$ such that $$H|_S:=\{E\cap S: E \in E(H)\}$$ contains a subhypergraph isomorphic to $$F$$. In this paper we give an upper bound on the number of edges in a $$3$$-uniform hypergraph that does not contain $$K_{2,t}$$ as a trace when $$t$$ is large. In particular, we show that $$\lim_{t\to \infty}\lim_{n\to \infty} \frac{\mathrm{ex}(n, \mathrm{Tr}_3(K_{2,t}))}{t^{3/2}n^{3/2}} = \frac{1}{6}.$$ Moreover, we show $$\frac{1}{2} n^{3/2} + o(n^{3/2}) \leqslant \mathrm{ex}(n, \mathrm{Tr}_3(C_4)) \leqslant \frac{5}{6} n^{3/2} + o(n^{3/2})$$. 
  4. Let $$G$$ be a $$t$$-tough graph on $$n\ge 3$$ vertices for some $$t>0$$. It was shown by Bauer et al. in 1995 that if the minimum degree of $$G$$ is greater than $$\frac{n}{t+1}-1$$, then $$G$$ is hamiltonian. In terms of Ore-type hamiltonicity conditions, the problem had only been studied for $$t$$ between 1 and 2, until the second author recently proved a general result: if the degree sum of any two nonadjacent vertices of $$G$$ is greater than $$\frac{2n}{t+1}+t-2$$, then $$G$$ is hamiltonian. It was conjectured in the same paper that the $$+t$$ in the bound $$\frac{2n}{t+1}+t-2$$ can be removed. Here we confirm the conjecture. This generalizes the result of Bauer, Broersma, van den Heuvel, and Veldman. Furthermore, we characterize all $$t$$-tough graphs $$G$$ on $$n\ge 3$$ vertices for which $$\sigma_2(G) = \frac{2n}{t+1}-2$$ but $$G$$ is non-hamiltonian.
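As mentioned in the first related item above, here is a minimal sketch of one common route to a $$\frac{1}{2}$$ guarantee for unconstrained, possibly non-monotone submodular maximization: a randomized double-greedy pass that decides element by element whether to keep it, with the true marginal gains replaced by plug-in estimates from repeated plays. The oracle `play`, the sample count `m`, and the exact update rule are illustrative assumptions and not necessarily the RGL algorithm of the cited paper.

```python
import random

def rgl_double_greedy(V, T, play, m=None):
    """Bandit-style randomized double greedy for unconstrained
    (possibly non-monotone) submodular maximization.

    play(S) returns one noisy reward for playing the set S.  The
    sample count m per value estimate is a heuristic, not the
    schedule used in the RGL analysis.
    """
    n = len(V)
    if m is None:
        m = max(1, int(T ** (2 / 3) / (4 * n)))  # 4 estimates per element
    t = 0

    def estimate(S):
        nonlocal t
        total = 0.0
        for _ in range(m):
            total += play(list(S))
            t += 1
        return total / m

    X, Y = set(), set(V)
    for e in V:
        a = estimate(X | {e}) - estimate(X)      # gain of adding e to X
        b = estimate(Y - {e}) - estimate(Y)      # gain of dropping e from Y
        a_plus, b_plus = max(a, 0.0), max(b, 0.0)
        p = 1.0 if a_plus + b_plus == 0 else a_plus / (a_plus + b_plus)
        if random.random() < p:
            X.add(e)
        else:
            Y.discard(e)

    while t < T:                                 # commit to the final set
        play(list(X))
        t += 1
    return X
```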