arXiv:1704.03926 [cs.LG]

Value Directed Exploration in Multi-Armed Bandits with Structured Priors

Bence Cserna, Marek Petrik, Reazul Hasan Russel, Wheeler Ruml

Published 2017-04-12 (Version 1)

Multi-armed bandits are a quintessential machine learning problem, requiring a careful balance of exploration and exploitation. While there has been progress in developing algorithms with strong theoretical guarantees, less attention has been paid to practical near-optimal finite-time performance. In this paper, we propose an algorithm for Bayesian multi-armed bandits that utilizes value-function-driven online planning techniques. Building on previous work on UCB and the Gittins index, we introduce linearly separable value functions that take both the expected return and the benefit of exploration into account when performing n-step lookahead. The algorithm enjoys a sub-linear performance guarantee, and we present simulation results that confirm its strength on problems with structured priors. The simplicity and generality of our approach make it a strong candidate for analyzing more complex multi-armed bandit problems.
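For intuition, the n-step lookahead the abstract refers to can be made concrete with a small sketch. The following is a minimal illustration, not the paper's algorithm, and every name in it is ours: it performs exact lookahead for Beta-Bernoulli arms by expectimax over posterior updates, so the value of pulling an arm includes the information its outcome provides. The linearly separable value functions described in the abstract can be read as a tractable surrogate for this exponential-in-depth computation.

    import numpy as np

    rng = np.random.default_rng(0)

    def lookahead(posteriors, depth):
        # Expected total reward from acting optimally for `depth` more
        # pulls, computed by exact expectimax over Beta-Bernoulli
        # posterior updates. The gap between this and myopic play is
        # the benefit of exploration.
        if depth == 0:
            return 0.0
        return max(arm_value(posteriors, i, depth)
                   for i in range(len(posteriors)))

    def arm_value(posteriors, i, depth):
        a, b = posteriors[i]
        mean = a / (a + b)  # posterior predictive P(reward = 1)
        # Posteriors after observing a success or a failure on arm i.
        up = posteriors[:i] + [(a + 1, b)] + posteriors[i + 1:]
        dn = posteriors[:i] + [(a, b + 1)] + posteriors[i + 1:]
        return (mean
                + mean * lookahead(up, depth - 1)
                + (1 - mean) * lookahead(dn, depth - 1))

    def choose_arm(posteriors, depth=2):
        values = [arm_value(posteriors, i, depth)
                  for i in range(len(posteriors))]
        return int(np.argmax(values))

    # Tiny simulation: three Bernoulli arms, uniform Beta(1, 1) priors.
    true_means = [0.3, 0.5, 0.7]
    posteriors = [(1.0, 1.0)] * 3
    for t in range(200):
        arm = choose_arm(posteriors, depth=2)
        reward = int(rng.random() < true_means[arm])
        a, b = posteriors[arm]
        posteriors[arm] = (a + reward, b + 1 - reward)

    print("posterior means:",
          [round(a / (a + b), 3) for a, b in posteriors])

Note the cost: with K arms, each decision expands a tree of roughly (2K)^(n-1) posterior states, which is exactly why an exhaustive lookahead is impractical and a cheap value-function approximation of it is attractive.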

Related articles:
arXiv:1206.6852 [cs.LG] (Published 2012-06-27)
Structured Priors for Structure Learning
arXiv:2307.07264 [cs.LG] (Published 2023-07-14)
On Interpolating Experts and Multi-Armed Bandits
arXiv:2211.06883 [cs.LG] (Published 2022-11-13)
Generalizing distribution of partial rewards for multi-armed bandits with temporally-partitioned rewards