

Search for: All records

Creators/Authors contains: "Aggarwal, Vaneet"

Note: Clicking a Digital Object Identifier (DOI) number takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites whose policies may differ from those of this site.

  1. Free, publicly-accessible full text available December 30, 2024
  2. Free, publicly-accessible full text available December 13, 2024
  3. Free, publicly-accessible full text available December 1, 2024
  4. Free, publicly-accessible full text available August 1, 2024
  5. Free, publicly-accessible full text available July 3, 2024
  6. Free, publicly-accessible full text available July 10, 2024
  7. We investigate the problem of unconstrained combinatorial multi-armed bandits with full-bandit feedback and stochastic rewards for submodular maximization. Previous work studies this problem under the assumption that the reward function is both submodular and monotone. We study a more general setting in which the reward function is not necessarily monotone and submodularity is assumed only in expectation. We propose the Randomized Greedy Learning (RGL) algorithm and prove that it achieves a $\frac{1}{2}$-regret upper bound of $\tilde{\mathcal{O}}(n T^{\frac{2}{3}})$ for horizon $T$ and number of arms $n$. We also show empirically that RGL outperforms other full-bandit variants in both submodular and non-submodular settings. (An illustrative code sketch of the randomized-greedy idea appears after this results list.)
  8. Free, publicly-accessible full text available May 8, 2024
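
Item 7 is the only entry above with a visible abstract, and it describes the algorithmic idea concretely enough to sketch. Below is a minimal Python sketch of a randomized double-greedy procedure (in the style of Buchbinder et al.) for unconstrained, not-necessarily-monotone submodular maximization under full-bandit feedback, where expected marginal gains are replaced by empirical means of noisy subset plays. The function name play, the budget split m = T // (4 * n), and the toy objective below are illustrative assumptions; this is a sketch of the idea the abstract describes, not the paper's exact RGL pseudocode.

    import numpy as np

    def rgl(n, T, play, seed=None):
        """Sketch: randomized double greedy with full-bandit estimates.

        n    -- number of base arms (ground-set elements)
        T    -- horizon, i.e., total number of subset plays allowed
        play -- play(S) returns one noisy reward for subset S; its
                expectation is assumed submodular (not necessarily monotone)
        """
        rng = np.random.default_rng(seed)
        # Each of the n rounds uses 4 set estimates; split the horizon evenly.
        m = max(1, T // (4 * n))

        def est(S):
            # Empirical-mean estimate of the expected reward of subset S.
            return np.mean([play(S) for _ in range(m)])

        X, Y = set(), set(range(n))  # lower/upper sets of double greedy
        for i in range(n):
            a = max(est(X | {i}) - est(X), 0.0)  # est. gain of adding i to X
            b = max(est(Y - {i}) - est(Y), 0.0)  # est. gain of removing i from Y
            p = 1.0 if a + b == 0.0 else a / (a + b)
            if rng.random() < p:  # keep i with probability proportional to a
                X.add(i)
            else:
                Y.discard(i)
        return X  # X == Y here; the selected subset

A quick way to exercise the sketch is a noisy modular objective with mixed-sign weights (modular functions are submodular, and the negative weights make the objective non-monotone):

    w = np.array([3.0, -1.0, 2.0, 0.5, -2.0])
    noisy_f = lambda S: w[list(S)].sum() + np.random.normal(0.0, 0.1)
    print(rgl(n=5, T=2000, play=noisy_f))  # tends to select {0, 2, 3}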