arXiv:1209.2693 [cs.LG]
Regret Bounds for Restless Markov Bandits
Ronald Ortner, Daniil Ryabko, Peter Auer, Rémi Munos
Published 2012-09-12 (Version 1)
We consider the restless Markov bandit problem, in which the state of each arm evolves according to a Markov process independently of the learner's actions. We suggest an algorithm that after $T$ steps achieves $\tilde{O}(\sqrt{T})$ regret with respect to the best policy that knows the distributions of all arms. No assumptions on the Markov chains are made except that they are irreducible. In addition, we show that index-based policies are necessarily suboptimal for the considered problem.
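To make the setting concrete, here is a minimal toy simulation of the restless dynamics described above: every arm's Markov chain advances at every step, whether or not that arm is pulled, and the learner only collects the reward of the chosen arm. The class and parameter names are illustrative, not taken from the paper, and the algorithm itself is not implemented here.

```python
import random

class RestlessBandit:
    """Toy restless Markov bandit environment (illustrative sketch).

    chains[a][s]  = transition probabilities out of state s of arm a
    rewards[a][s] = reward for pulling arm a while it is in state s
    """

    def __init__(self, chains, rewards, seed=0):
        self.chains = chains
        self.rewards = rewards
        self.states = [0] * len(chains)  # all arms start in state 0
        self.rng = random.Random(seed)

    def step(self, arm):
        # Reward depends on the pulled arm's current state.
        r = self.rewards[arm][self.states[arm]]
        # Restless dynamics: ALL arms transition, independently of the action.
        for a, chain in enumerate(self.chains):
            p = chain[self.states[a]]
            self.states[a] = self.rng.choices(range(len(p)), weights=p)[0]
        return r

# Two 2-state arms; both transition matrices are irreducible,
# as the paper assumes, but mix at different speeds.
chains = [
    [[0.9, 0.1], [0.1, 0.9]],   # arm 0: slow-mixing "sticky" chain
    [[0.1, 0.9], [0.9, 0.1]],   # arm 1: fast-mixing alternating chain
]
rewards = [[0.0, 1.0], [0.2, 0.8]]

env = RestlessBandit(chains, rewards, seed=42)
# A naive round-robin learner, just to exercise the environment.
total = sum(env.step(arm=t % 2) for t in range(1000))
```

Note that because the chains keep evolving while unobserved, the learner faces partial information about each arm's current state; this is what separates the restless setting from the classical (rested) Markov bandit.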
Comments: In proceedings of The 23rd International Conference on Algorithmic Learning Theory (ALT 2012)
Journal: Proceedings of ALT, Lyon, France, LNCS 7568, pp. 214-228, 2012