arXiv Analytics

arXiv:1608.03023 [cs.LG]

Stochastic Rank-1 Bandits

Sumeet Katariya, Branislav Kveton, Csaba Szepesvari, Claire Vernade, Zheng Wen

Published 2016-08-10 (Version 1)

We propose stochastic rank-$1$ bandits, a class of online learning problems where at each step a learning agent chooses a pair of row and column arms, and receives the product of their payoffs as a reward. The main challenge of the problem is that the learning agent does not observe the payoffs of the individual arms, only their product. The payoffs of the row and column arms are stochastic, and independent of each other. We propose a computationally efficient algorithm for solving our problem, Rank1Elim, and derive an $O((K + L) (1 / \Delta) \log n)$ upper bound on its $n$-step regret, where $K$ is the number of rows, $L$ is the number of columns, and $\Delta$ is the minimum gap in the row and column payoffs. To the best of our knowledge, this is the first bandit algorithm for stochastic rank-$1$ matrix factorization whose regret is linear in $K + L$, $1 / \Delta$, and $\log n$. We evaluate Rank1Elim on a synthetic problem and show that its regret scales as suggested by our upper bound. We also compare it to UCB1, and show significant improvements as $K$ and $L$ increase.
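The feedback model described in the abstract can be sketched as a small simulator. This is a minimal illustration, not the paper's code: the instance sizes, the uniform draws for the hidden row means `u` and column means `v`, and the Bernoulli payoff distributions are all assumptions chosen for the example; the one property it demonstrates is that the agent observes only the product of the two payoffs, never the individual factors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rank-1 instance: K row arms and L column arms with
# hidden Bernoulli mean payoffs u (rows) and v (columns).
K, L = 4, 6
u = rng.uniform(0.2, 0.9, size=K)   # row means (unknown to the agent)
v = rng.uniform(0.2, 0.9, size=L)   # column means (unknown to the agent)

def pull(i, j):
    """Play row arm i and column arm j; return only the product
    of their stochastic payoffs (the factors are never revealed)."""
    x = int(rng.random() < u[i])    # Bernoulli row payoff
    y = int(rng.random() < v[j])    # Bernoulli column payoff
    return x * y                    # observed reward

# The expected reward of pair (i, j) is u[i] * v[j], so the optimal
# pair maximizes the outer product of the two mean vectors.
best = np.unravel_index(np.argmax(np.outer(u, v)), (K, L))
reward = pull(*best)
```

Because the expected reward matrix is the rank-1 outer product `np.outer(u, v)`, identifying the best row and the best column separately suffices, which is what lets an algorithm like Rank1Elim achieve regret linear in $K + L$ rather than $K \times L$.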
