arXiv:2004.13106 [cs.LG]

Learning to Rank in the Position Based Model with Bandit Feedback

Beyza Ermis, Patrick Ernst, Yannik Stein, Giovanni Zappella

Published 2020-04-27 (Version 1)

Personalization is a crucial aspect of many online experiences. In particular, content ranking is often a key component in delivering sophisticated personalization results. Commonly, supervised learning-to-rank methods are applied; these suffer from bias introduced during data collection by the production systems in charge of producing the rankings. To compensate for this problem, we leverage contextual multi-armed bandits. We propose novel extensions of two well-known algorithms, LinUCB and Linear Thompson Sampling, to the ranking use case. To account for the biases in a production environment, we employ the position-based click model. Finally, we show the validity of the proposed algorithms by conducting extensive offline experiments on synthetic datasets, as well as customer-facing online A/B experiments.
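Under the position-based click model referenced in the abstract, the probability of a click on item i displayed at position k factorizes as P(click | i, k) = θ_k · α_i, where θ_k is the examination probability of position k and α_i is the attractiveness of item i. The abstract does not spell out the algorithms, so the following is only a minimal, hypothetical sketch of how a LinUCB-style ranker might discount bandit feedback by examination probabilities; the class name PBMLinUCB, the assumption of known θ values, and the exact update rule are illustrative choices, not the authors' method.

```python
# Hypothetical sketch: a LinUCB-style ranker under a position-based click model.
# Examination probabilities theta[k] are assumed known; the paper's actual
# algorithms and updates may differ -- this only illustrates the general idea.
import numpy as np

class PBMLinUCB:
    def __init__(self, dim, n_positions, theta, alpha=1.0, reg=1.0):
        self.A = reg * np.eye(dim)      # regularized Gram matrix
        self.b = np.zeros(dim)          # click-weighted feature sum
        self.theta = np.asarray(theta)  # examination prob. per position (assumed known)
        self.n_positions = n_positions
        self.alpha = alpha              # exploration strength

    def rank(self, item_features):
        """Return the indices of the items to show, ordered by UCB score."""
        A_inv = np.linalg.inv(self.A)
        w = A_inv @ self.b              # ridge-regression point estimate
        means = item_features @ w
        # Per-item confidence width: sqrt(x^T A^{-1} x)
        widths = np.sqrt(np.einsum('ij,jk,ik->i', item_features, A_inv, item_features))
        ucb = means + self.alpha * widths
        return np.argsort(-ucb)[: self.n_positions]

    def update(self, item_features, ranking, clicks):
        """Weighted least-squares update: features are scaled by theta[pos],
        so an unclicked item at a rarely examined position carries little
        negative evidence, as the position-based model intends."""
        for pos, item in enumerate(ranking):
            x = self.theta[pos] * item_features[item]
            self.A += np.outer(x, x)
            self.b += clicks[pos] * x

# Usage on simulated data:
rng = np.random.default_rng(0)
bandit = PBMLinUCB(dim=5, n_positions=3, theta=[1.0, 0.6, 0.3])
X = rng.standard_normal((20, 5))            # feature vectors of 20 candidate items
slate = bandit.rank(X)                      # indices of the 3 items to display
bandit.update(X, slate, clicks=[1, 0, 0])   # simulated click on the top slot
```

Scaling the features by θ_k is one standard way to fold the examination probabilities into a linear-bandit update; a Linear Thompson Sampling variant would replace the UCB score with a draw from the posterior over w but could reuse the same position weighting.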

Related articles:
arXiv:1202.3079 [cs.LG] (Published 2012-02-14)
Towards minimax policies for online linear optimization with bandit feedback
arXiv:2106.05165 [cs.LG] (Published 2021-06-09)
A Lyapunov-Based Methodology for Constrained Optimization with Bandit Feedback
arXiv:2008.05523 [cs.LG] (Published 2020-08-12)
Non-Stochastic Control with Bandit Feedback