arXiv Analytics

arXiv:1205.2874 [cs.LG]

Decoupling Exploration and Exploitation in Multi-Armed Bandits

Orly Avner, Shie Mannor, Ohad Shamir

Published 2012-05-13, updated 2012-06-30 (Version 3)

We consider a multi-armed bandit problem where the decision maker can explore and exploit different arms at every round. The exploited arm adds to the decision maker's cumulative reward (without necessarily revealing the reward), while the explored arm reveals its value. We devise algorithms for this setup and show that the dependence on the number of arms, k, can be much better than the standard square-root-of-k dependence, depending on the behavior of the arms' reward sequences. For the important case of piecewise stationary stochastic bandits, we show a significant improvement over existing algorithms. Our algorithms are based on a non-uniform sampling policy, which we show is essential to the success of any algorithm in the adversarial setup. Finally, we present simulation results on a setting inspired by ultra-wideband channel selection, indicating the applicability of our algorithms.
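The decoupled interaction protocol described in the abstract can be sketched in a few lines: each round, one arm is exploited (its reward accrues but is not observed) and one arm is explored (its reward is observed but does not accrue). The sketch below is a toy illustration of this setting with a simple non-uniform exploration rule as a stand-in for the paper's sampling policy; the arm means, the exploration weights, and the function name are all assumptions for demonstration, not the authors' algorithm.

```python
import random

def decoupled_bandit(means, horizon, seed=0):
    """Toy simulation of the decoupled exploration/exploitation setting.

    Each round: the exploited arm's reward is added to the cumulative
    reward but NOT observed; the explored arm's reward is observed but
    NOT added. Exploration is non-uniform: arms with higher empirical
    means (plus a small bonus for rarely observed arms) are sampled more
    often. This weighting is a hypothetical stand-in for the paper's
    non-uniform sampling policy.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k      # observations per arm (from exploration only)
    est = [0.0] * k       # empirical mean reward per arm
    cum_reward = 0.0

    for _ in range(horizon):
        # Exploit: play the empirically best arm; reward accrues unobserved.
        exploit = max(range(k), key=lambda i: est[i])
        cum_reward += 1.0 if rng.random() < means[exploit] else 0.0

        # Explore: sample non-uniformly and observe (but do not collect).
        weights = [est[i] + 1.0 / (counts[i] + 1) for i in range(k)]
        explore = rng.choices(range(k), weights=weights)[0]
        x = 1.0 if rng.random() < means[explore] else 0.0
        counts[explore] += 1
        est[explore] += (x - est[explore]) / counts[explore]

    return cum_reward

# e.g. decoupled_bandit([0.2, 0.5, 0.8], horizon=2000)
```

Note how exploration and exploitation use entirely separate information channels: the estimates `est` are updated only from explored arms, while the cumulative reward depends only on exploited arms, which is precisely the decoupling the paper studies.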

Comments: Full version of the paper presented at ICML 2012
Categories: cs.LG
Related articles:
arXiv:1802.05054 [cs.LG] (Published 2018-02-14)
GEP-PG: Decoupling Exploration and Exploitation in Deep Reinforcement Learning Algorithms
arXiv:2205.13566 [cs.LG] (Published 2022-05-26)
Exploration, Exploitation, and Engagement in Multi-Armed Bandits with Abandonment
arXiv:2112.06517 [cs.LG] (Published 2021-12-13, updated 2022-03-23)
Top $K$ Ranking for Multi-Armed Bandit with Noisy Evaluations