arXiv:1910.02757 [stat.ML]

Stochastic Bandits with Delay-Dependent Payoffs

Leonardo Cella, Nicolò Cesa-Bianchi

Published 2019-10-07 (version 1)

Motivated by recommendation problems in music streaming platforms, we propose a nonstationary stochastic bandit model in which the expected reward of an arm depends on the number of rounds that have passed since the arm was last pulled. After proving that finding an optimal policy is NP-hard even when all model parameters are known, we introduce a class of ranking policies provably approximating, to within a constant factor, the expected reward of the optimal policy. We present an algorithm whose regret with respect to the best ranking policy is bounded by $\widetilde{\mathcal{O}}\big(\sqrt{kT}\big)$, where $k$ is the number of arms and $T$ is the time horizon. Our algorithm uses only $\mathcal{O}\big(k \ln\ln T\big)$ switches, which helps when switching between policies is costly. As constructing the class of ranking policies requires ordering the arms according to their expected rewards, we also bound the number of pulls needed to do so. Finally, we run experiments comparing our algorithm against UCB on different problem instances.

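The model can be made concrete with a small simulation. The sketch below is illustrative, not the paper's construction: it assumes a hypothetical recovery curve $\mu_i (1 - e^{-d/\tau_i})$ for the delay-dependent expected reward (the abstract only says the mean depends on the delay $d$ since the last pull), and the names `mu`, `tau`, and `ranking_policy`, along with the round-robin-over-the-top-$m$-arms scheme, are assumptions made for the example. It shows the tradeoff that ranking policies navigate: cycling over more arms increases each arm's delay, and hence its payoff when pulled, but mixes in arms with lower saturation levels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: k arms whose expected reward grows with the delay d
# (rounds since the arm was last pulled) and saturates at a base mean mu_i.
# The form mu_i * (1 - exp(-d / tau_i)) is an illustrative assumption,
# not the paper's definition of the delay-dependent payoff.
k = 5
mu = rng.uniform(0.2, 1.0, size=k)    # saturation levels (assumed)
tau = rng.uniform(1.0, 5.0, size=k)   # recovery speeds (assumed)

def expected_reward(i, d):
    """Expected payoff of arm i when pulled d rounds after its previous pull."""
    return mu[i] * (1.0 - np.exp(-d / tau[i]))

def simulate(policy, T):
    """Run a policy (a map round -> arm) for T rounds; return the total
    expected reward. Rewards are taken at their means (no noise) to keep
    the illustration deterministic."""
    last_pull = np.full(k, -np.inf)   # round of each arm's previous pull
    total = 0.0
    for t in range(T):
        i = policy(t)
        d = min(t - last_pull[i], 1e9)  # delay; cap the initial +inf
        total += expected_reward(i, d)
        last_pull[i] = t
    return total

# A ranking-style policy in the spirit of the abstract: order the arms by
# their saturation means and cycle round-robin through the top m of them,
# so every pulled arm is revisited with delay exactly m.
order = np.argsort(-mu)

def ranking_policy(m):
    top = order[:m]
    return lambda t: top[t % m]

T = 10_000
for m in range(1, k + 1):
    print(f"top-{m} round-robin: total reward = {simulate(ranking_policy(m), T):.1f}")
```

Printing the totals for each $m$ makes the tension visible: small $m$ pulls only the best arms but at short delays where they have not recovered, while large $m$ grants long delays at the cost of including weak arms. The best fixed ranking sits in between, which is the quantity the paper's learning algorithm competes against.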
Related articles:
arXiv:1911.01483 [stat.ML] (Published 2019-11-04)
Statistical Inference for Model Parameters in Stochastic Gradient Descent via Batch Means
arXiv:2405.18601 [stat.ML] (Published 2024-05-28)
From Conformal Predictions to Confidence Regions
arXiv:2105.02344 [stat.ML] (Published 2021-05-05)
Policy Learning with Adaptively Collected Data