arXiv:1911.05142 [cs.LG]

Incentivized Exploration for Multi-Armed Bandits under Reward Drift

Zhiyuan Liu, Huazheng Wang, Fan Shen, Kai Liu, Lijun Chen

Published 2019-11-12 (Version 1)

We study incentivized exploration for the multi-armed bandit (MAB) problem, where players receive compensation for exploring arms other than the greedy choice and may provide biased feedback on the reward. We seek to understand the impact of this drifted reward feedback by analyzing the performance of three incentivized MAB algorithms: UCB, $\varepsilon$-Greedy, and Thompson Sampling. Our results show that all three achieve $\mathcal{O}(\log T)$ regret and compensation under the drifted reward, and are therefore effective in incentivizing exploration. Numerical examples are provided to complement the theoretical analysis.
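
For concreteness, below is a minimal Python sketch of how an incentivized UCB scheme with drifted feedback could look. It is an illustrative reading of the abstract, not the paper's exact protocol: the payment rule (the gap between the player's greedy empirical mean and the recommended arm's), and the drift model (reported feedback inflated by at most the compensation) are assumptions made for this sketch.

```python
import numpy as np

# Sketch: a principal runs UCB1 and pays a myopic player whenever the
# UCB arm differs from the player's greedy arm; the player's reported
# reward is drifted (biased) by up to the compensation received.

rng = np.random.default_rng(0)
K, T = 5, 10_000
true_means = rng.uniform(0.2, 0.8, size=K)

counts = np.ones(K)                               # one free pull per arm
est = rng.binomial(1, true_means).astype(float)   # drifted empirical means
regret = compensation = 0.0

for t in range(K, T):
    ucb = est + np.sqrt(2 * np.log(t) / counts)   # standard UCB1 index
    arm = int(np.argmax(ucb))                     # principal's recommendation
    greedy = int(np.argmax(est))                  # player's greedy choice

    pay = 0.0
    if arm != greedy:
        # Compensate the forgone (drifted) empirical reward gap.
        pay = est[greedy] - est[arm]
        compensation += pay

    reward = rng.binomial(1, true_means[arm])     # true stochastic reward
    feedback = reward + pay                       # assumed drift: bounded by payment
    counts[arm] += 1
    est[arm] += (feedback - est[arm]) / counts[arm]
    regret += true_means.max() - true_means[arm]

print(f"regret ~ {regret:.1f}, compensation ~ {compensation:.1f}")
```

Under this drift model both the cumulative regret and the cumulative compensation grow slowly with $T$, consistent with the $\mathcal{O}(\log T)$ bounds claimed in the abstract.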

Comments: 10 pages, 2 figures, AAAI 2020
Categories: cs.LG, stat.ML
Related articles:
arXiv:2307.07264 [cs.LG] (Published 2023-07-14)
On Interpolating Experts and Multi-Armed Bandits
arXiv:2211.06883 [cs.LG] (Published 2022-11-13)
Generalizing distribution of partial rewards for multi-armed bandits with temporally-partitioned rewards
arXiv:1911.09458 [cs.LG] (Published 2019-11-21)
Observe Before Play: Multi-armed Bandit with Pre-observations