%A Pacchiano, Aldo
%A Ghavamzadeh, Mohammad
%A Bartlett, Peter
%A Jiang, Heinrich
%E Banerjee, Arindam
%E Fukumizu, Kenji
%D 2021
%J Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%V 130
%M OSTI ID: 10273286
%P 2827-2835
%T Stochastic Bandits with Linear Constraints
%X We study a constrained contextual linear bandit setting, in which the goal of the agent is to produce a sequence of policies that maximizes expected cumulative reward over multiple rounds, while the expected cost of each policy remains below a given threshold. We propose an upper-confidence bound algorithm for this problem, called optimistic pessimistic linear bandit (OPLB), and prove a sublinear bound on its regret that is inversely proportional to the difference between the constraint threshold and the cost of a known feasible action. Our algorithm balances exploration and constraint satisfaction using a novel idea that scales the radii of the reward and cost confidence sets with different scaling factors. We further specialize our results to multi-armed bandits, propose a computationally efficient algorithm for this setting, and prove a regret bound that is better than simply casting multi-armed bandits as an instance of linear bandits and using the regret bound of OPLB. We also prove a lower bound for the problem studied in the paper and provide simulations to validate our theoretical results. Finally, we show how our algorithm and analysis can be extended to multiple constraints and to the case when the cost of the feasible action is unknown.
%0 Journal Article