arXiv:1910.06054 [cs.LG]

An Optimal Algorithm for Adversarial Bandits with Arbitrary Delays

Julian Zimmert, Yevgeny Seldin

Published 2019-10-14 (Version 1)

We propose a new algorithm for adversarial multi-armed bandits with unrestricted delays. The algorithm is based on a novel hybrid regularizer applied in the Follow the Regularized Leader (FTRL) framework. It achieves an $\mathcal{O}(\sqrt{kn}+\sqrt{D\log(k)})$ regret guarantee, where $k$ is the number of arms, $n$ is the number of rounds, and $D$ is the total delay. The result matches the lower bound within constants and requires no prior knowledge of $n$ or $D$. Additionally, we propose a refined tuning of the algorithm, which achieves an $\mathcal{O}(\sqrt{kn}+\min_{S}|S|+\sqrt{D_{\bar S}\log(k)})$ regret guarantee, where $S$ is a set of rounds excluded from delay counting, $\bar S = [n]\setminus S$ are the counted rounds, and $D_{\bar S}$ is the total delay in the counted rounds. If the delays are highly unbalanced, the latter regret guarantee can be significantly tighter than the former. The result requires no advance knowledge of the delays and resolves an open problem of Thune et al. (2019). The new FTRL algorithm and its refined tuning are anytime and require no doubling, which resolves another open problem of Thune et al. (2019).
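To illustrate the setting, here is a minimal numerical sketch of FTRL with a hybrid regularizer (a Tsallis-entropy term plus a negative-entropy term) under delayed bandit feedback. This is not the authors' reference implementation: the learning-rate schedules, the regularizer scaling, the use of a generic numerical solver, and the helper names `ftrl_distribution` and `delayed_bandit_ftrl` are all illustrative assumptions; the paper's exact tuning is what yields the stated regret bound.

```python
# Illustrative sketch only: hybrid-regularized FTRL for bandits with delayed feedback.
import numpy as np
from scipy.optimize import minimize

def ftrl_distribution(L_obs, eta, gamma, k):
    """argmin over the simplex of <L_obs, x> - (2/eta) * sum_i sqrt(x_i) + (1/gamma) * sum_i x_i log x_i."""
    def objective(x):
        x = np.clip(x, 1e-12, 1.0)
        return (L_obs @ x
                - (2.0 / eta) * np.sum(np.sqrt(x))
                + (1.0 / gamma) * np.sum(x * np.log(x)))
    x0 = np.full(k, 1.0 / k)
    cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0},)
    bnds = [(1e-12, 1.0)] * k
    res = minimize(objective, x0, method='SLSQP', bounds=bnds, constraints=cons)
    x = np.clip(res.x, 1e-12, None)
    return x / x.sum()

def delayed_bandit_ftrl(losses, delays, k, seed=None):
    """losses: (n, k) array of adversarial losses in [0, 1]; delays: length-n array of feedback delays."""
    rng = np.random.default_rng(seed)
    n = len(losses)
    L_obs = np.zeros(k)   # cumulative importance-weighted loss estimates that have arrived
    pending = []          # (arrival_round, loss_estimate_vector) for feedback still in flight
    total_loss = 0.0
    for t in range(1, n + 1):
        # release feedback whose delay has elapsed
        L_obs += sum((e for a, e in pending if a <= t), np.zeros(k))
        pending = [(a, e) for a, e in pending if a > t]
        # assumed illustrative schedules; the paper's tuning of eta_t, gamma_t differs
        eta = 1.0 / np.sqrt(t)
        gamma = np.sqrt(np.log(k) / t)
        x = ftrl_distribution(L_obs, eta, gamma, k)
        arm = rng.choice(k, p=x)
        loss = losses[t - 1, arm]
        total_loss += loss
        est = np.zeros(k)
        est[arm] = loss / x[arm]                 # importance-weighted estimate of the loss vector
        pending.append((t + delays[t - 1], est)) # observed only after the delay
    return total_loss
```

The two regularizer terms play the roles described in the abstract: the square-root (Tsallis) term drives the $\sqrt{kn}$ part of the bound, while the negative-entropy term controls the delay-dependent $\sqrt{D\log(k)}$ part.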

Related articles:
arXiv:1807.07623 [cs.LG] (Published 2018-07-19)
An Optimal Algorithm for Stochastic and Adversarial Bandits
arXiv:1702.06103 [cs.LG] (Published 2017-02-20)
An Improved Parametrization and Analysis of the EXP3++ Algorithm for Stochastic and Adversarial Bandits
arXiv:1811.12253 [cs.LG] (Published 2018-10-23)
Unifying the stochastic and the adversarial Bandits with Knapsack