arXiv:1311.0800 [cs.LG]

Distributed Exploration in Multi-Armed Bandits

Eshcar Hillel, Zohar Karnin, Tomer Koren, Ronny Lempel, Oren Somekh

Published 2013-11-04 (Version 1)

We study exploration in Multi-Armed Bandits in a setting where $k$ players collaborate in order to identify an $\epsilon$-optimal arm. Our motivation comes from the recent adoption of bandit algorithms in computationally intensive, large-scale applications. Our results demonstrate a non-trivial tradeoff between the number of arm pulls required by each of the players and the amount of communication between them. In particular, our main result shows that by allowing the $k$ players to communicate only once, they are able to learn $\sqrt{k}$ times faster than a single player. That is, distributing learning across $k$ players gives rise to a factor $\sqrt{k}$ parallel speed-up. We complement this result with a lower bound showing that this is, in general, the best possible. At the other extreme, we present an algorithm that achieves the ideal factor $k$ speed-up in learning performance, with communication only logarithmic in $1/\epsilon$.
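
To make the communicate-once setting concrete, below is a minimal sketch of one-round collaborative exploration: each of the $k$ players explores all arms on its own pull budget and sends a single message (its empirically best arm and that arm's estimated mean) to a coordinator, which outputs the arm with the highest reported estimate. This illustrates only the communication pattern, not the paper's actual one-round protocol (which allocates pulls more carefully to obtain the $\sqrt{k}$ speed-up); the Bernoulli environment and the `pull` interface are assumptions made for the example.

```python
import random

def pull(arm_mean):
    """Hypothetical environment: one pull of a Bernoulli arm with the given mean."""
    return 1.0 if random.random() < arm_mean else 0.0

def player_explore(arm_means, budget):
    """One player's local phase: pull every arm equally often and
    return its empirically best arm with that arm's estimated mean."""
    pulls_per_arm = max(1, budget // len(arm_means))
    estimates = []
    for mean in arm_means:
        total = sum(pull(mean) for _ in range(pulls_per_arm))
        estimates.append(total / pulls_per_arm)
    best = max(range(len(arm_means)), key=lambda i: estimates[i])
    return best, estimates[best]

def one_round_collaboration(arm_means, k_players, budget_per_player):
    """Single communication round: each player sends one (arm, estimate)
    message; the coordinator returns the arm with the highest reported mean."""
    reports = [player_explore(arm_means, budget_per_player)
               for _ in range(k_players)]
    best_arm, _ = max(reports, key=lambda r: r[1])
    return best_arm

if __name__ == "__main__":
    arms = [0.3, 0.5, 0.45, 0.7, 0.65]  # true (unknown) arm means
    print(one_round_collaboration(arms, k_players=8, budget_per_player=200))
```

In this toy version each player's work is independent until the single round of messages, so the total number of pulls is shared across players; the paper's analysis quantifies exactly how much each player's individual budget can shrink under this one-round constraint.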

Related articles:
arXiv:1904.03293 [cs.LG] (Published 2019-04-05)
Collaborative Learning with Limited Interaction: Tight Bounds for Distributed Exploration in Multi-Armed Bandits
arXiv:1904.07272 [cs.LG] (Published 2019-04-15)
Introduction to Multi-Armed Bandits
arXiv:2402.01845 [cs.LG] (Published 2024-02-02, updated 2024-07-15)
Multi-Armed Bandits with Interference