arXiv Analytics

arXiv:2103.13059 [stat.ML]

Towards Optimal Algorithms for Multi-Player Bandits without Collision Sensing Information

Wei Huang, Richard Combes, Cindy Trinh

Published 2021-03-24Version 1

We propose a novel algorithm for multi-player multi-armed bandits without collision sensing information. Our algorithm circumvents two problems shared by all state-of-the-art algorithms: it does not require as input a lower bound on the minimal expected reward of an arm, and its performance does not scale inversely proportionally to the minimal expected reward. We prove a theoretical regret upper bound to justify these claims. We complement our theoretical results with numerical experiments, showing that the proposed algorithm also outperforms the state of the art in practice.
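To make the setting concrete, here is a minimal sketch (not the paper's algorithm) of the environment the abstract describes: several players pull arms simultaneously, colliding players receive reward 0, and no collision indicator is revealed, so a player cannot distinguish a collision from an unlucky draw. The class name and arm means are illustrative assumptions.

```python
import random

class NoSensingBanditEnv:
    """Multi-player bandit without collision sensing (illustrative sketch).

    Each round, every player selects an arm. A player pulling an arm alone
    receives a Bernoulli reward with that arm's mean; colliding players
    receive 0. Crucially, no collision indicator is observed: a reward of 0
    may come either from a collision or from the arm itself.
    """

    def __init__(self, means, seed=0):
        self.means = means              # expected reward of each arm
        self.rng = random.Random(seed)

    def step(self, choices):
        """choices: one arm index per player. Returns only the 0/1 rewards."""
        counts = {}
        for a in choices:
            counts[a] = counts.get(a, 0) + 1
        rewards = []
        for a in choices:
            if counts[a] > 1:
                rewards.append(0)       # collision: indistinguishable from a 0 draw
            else:
                rewards.append(1 if self.rng.random() < self.means[a] else 0)
        return rewards

env = NoSensingBanditEnv(means=[0.9, 0.5, 0.1])
print(env.step([0, 0, 2]))  # players 0 and 1 collide on arm 0
```

This sketch also hints at why the minimal expected reward matters: when an arm's mean is tiny, its legitimate rewards are almost always 0, so collisions are especially hard to detect on that arm.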

Related articles:
arXiv:2211.16275 [stat.ML] (Published 2022-11-29)
A survey on multi-player bandits
arXiv:2302.06025 [stat.ML] (Published 2023-02-12)
Beyond UCB: Statistical Complexity and Optimal Algorithms for Non-linear Ridge Bandits
arXiv:2502.09047 [stat.ML] (Published 2025-02-13)
Optimal Algorithms in Linear Regression under Covariate Shift: On the Importance of Precondition