arXiv Analytics

arXiv:2308.12000 [stat.ML]

On Uniformly Optimal Algorithms for Best Arm Identification in Two-Armed Bandits with Fixed Budget

Po-An Wang, Kaito Ariu, Alexandre Proutiere

Published 2023-08-23 (Version 1)

We study the problem of best-arm identification with a fixed budget in stochastic two-armed bandits with Bernoulli rewards. We prove that, surprisingly, there is no algorithm that (i) performs as well as the algorithm sampling each arm equally (referred to as the {\it uniform sampling} algorithm) on all instances, and that (ii) strictly outperforms this algorithm on at least one instance. In short, there is no algorithm better than the uniform sampling algorithm. Towards this result, we introduce the natural class of {\it consistent} and {\it stable} algorithms, and show that any algorithm that performs as well as the uniform sampling algorithm on all instances belongs to this class. The proof is completed by deriving a lower bound on the error rate satisfied by any consistent and stable algorithm, and by showing that the uniform sampling algorithm matches this lower bound. Our results resolve the two open problems presented in \cite{qin2022open}.
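As an illustration (not code from the paper), the uniform sampling baseline discussed in the abstract can be sketched as follows: split the fixed budget equally between the two Bernoulli arms and recommend the arm with the higher empirical mean. The arm means and budget below are arbitrary example values chosen for the sketch.

```python
import random

def uniform_sampling_bai(means, budget, rng):
    """Fixed-budget best-arm identification via uniform sampling:
    pull each Bernoulli arm budget // K times, then recommend the
    arm with the highest empirical mean. `means` are the (unknown
    to the algorithm) true success probabilities, used here only
    to simulate rewards."""
    pulls = budget // len(means)
    empirical = []
    for mu in means:
        # Simulate `pulls` Bernoulli(mu) rewards for this arm.
        successes = sum(rng.random() < mu for _ in range(pulls))
        empirical.append(successes / pulls)
    # Recommend the empirically best arm.
    return max(range(len(means)), key=lambda a: empirical[a])

# Example run: arm 0 is the true best arm (0.7 > 0.3).
rng = random.Random(0)
recommended = uniform_sampling_bai([0.7, 0.3], budget=200, rng=rng)
```

The paper's result says that, for two-armed Bernoulli instances, no consistent and stable algorithm can beat this equal-allocation rule on some instance without doing worse on another.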

Related articles:
arXiv:1710.06360 [stat.ML] (Published 2017-10-17)
Good Arm Identification via Bandit Feedback
arXiv:1407.4443 [stat.ML] (Published 2014-07-16, updated 2016-11-14)
On the Complexity of Best Arm Identification in Multi-Armed Bandit Models
arXiv:2301.03785 [stat.ML] (Published 2023-01-10)
Best Arm Identification in Stochastic Bandits: Beyond $β-$optimality