arXiv Analytics

arXiv:1102.3508 [math.OC]

Online Learning of Rested and Restless Bandits

Cem Tekin, Mingyan Liu

Published 2011-02-17, Version 1

In this paper we study the online learning problem involving rested and restless multiarmed bandits with multiple plays. The system consists of a single player/user and a set of K finite-state discrete-time Markov chains (arms) with unknown state spaces and statistics. At each time step the player can play M arms. The objective of the player is to decide, at each step of a sequence of trials, which M of the K arms to play so as to maximize its long-term reward. The restless multiarmed bandit is particularly relevant to the application of opportunistic spectrum access (OSA), where a (secondary) user has access to a set of K channels, each with time-varying condition as a result of random fading and/or certain primary users' activities.
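To make the setting concrete, the sketch below simulates the rested variant of the problem (an arm's state evolves only when it is played) and selects M of K arms per step with a generic UCB1-style index. This is an illustrative toy, not the paper's actual policy; the two-state arms, reward values, and the UCB index are assumptions introduced here for demonstration.

```python
import math
import random

class MarkovArm:
    """Hypothetical two-state rested arm: the chain transitions only when played."""
    def __init__(self, p01, p10, rewards, rng):
        self.p01, self.p10 = p01, p10    # transition probs: state 0 -> 1 and 1 -> 0
        self.rewards = rewards           # (reward in state 0, reward in state 1)
        self.state = 0
        self.rng = rng

    def step(self):
        # Advance the chain one step (rested model: called only when the arm is played).
        if self.state == 0:
            if self.rng.random() < self.p01:
                self.state = 1
        else:
            if self.rng.random() < self.p10:
                self.state = 0
        return self.rewards[self.state]

def ucb_multiple_play(arms, M, horizon):
    """Play M of the K arms each step, ranked by a UCB1 index on empirical means."""
    K = len(arms)
    counts = [0] * K        # number of times each arm was played
    sums = [0.0] * K        # cumulative reward collected from each arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= K:
            # Initialization: cycle through arms so every count is positive.
            chosen = [(t - 1 + i) % K for i in range(M)]
        else:
            index = [sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
                     for i in range(K)]
            chosen = sorted(range(K), key=lambda i: index[i], reverse=True)[:M]
        for i in chosen:
            r = arms[i].step()
            counts[i] += 1
            sums[i] += r
            total += r
    return total, counts

rng = random.Random(0)
arms = [MarkovArm(p01, 0.3, (0.0, 1.0), rng) for p01 in (0.1, 0.3, 0.5, 0.7)]
total, counts = ucb_multiple_play(arms, M=2, horizon=200)
```

With rewards in [0, 1], the total reward is bounded by `horizon * M`, and exactly `horizon * M` individual plays occur. In the restless case studied in the paper, every arm's state would evolve at every step whether played or not, which is what makes that setting substantially harder.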

Related articles: Most relevant | Search more
arXiv:1010.0056 [math.OC] (Published 2010-10-01)
Online Learning in Opportunistic Spectrum Access: A Restless Bandit Approach
arXiv:1610.00399 [math.OC] (Published 2016-10-03)
Deadline Scheduling as Restless Bandits
arXiv:1804.02100 [math.OC] (Published 2018-04-06, updated 2019-07-23)
Restless Bandits in Action: Resource Allocation, Competition and Reservation