arXiv:2502.08993 [stat.ML]

Off-Policy Evaluation for Recommendations with Missing-Not-At-Random Rewards

Tatsuki Takahashi, Chihiro Maru, Hiroko Shoji

Published 2025-02-13 (Version 1)

Unbiased recommender learning (URL) and off-policy evaluation/learning (OPE/L) techniques are effective in addressing the data bias caused by display positions and logging policies, thereby consistently improving recommendation performance. However, when both biases exist in the logged data, these estimators may suffer from significant bias. In this study, we first analyze the position bias of the OPE estimator when rewards are missing not at random. To mitigate both biases, we propose a novel estimator that leverages two probabilities, those of the logging policy and of reward observation, as propensity scores. Our experiments demonstrate that the proposed estimator outperforms other estimators even as the level of bias in reward observations increases.
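The abstract does not specify the exact form of the proposed estimator, but the description of weighting by "two probabilities of logging policies and reward observations" suggests an inverse-propensity-score estimator with a doubly corrected weight. The sketch below is an assumption-based illustration of that idea, not the paper's method: `logging_probs`, `obs_probs`, and the function name are hypothetical names introduced here for illustration.

```python
import numpy as np

def dual_propensity_ips(rewards, observed, target_probs, logging_probs, obs_probs):
    """Hypothetical sketch of an IPS-style OPE estimator that re-weights each
    logged reward by two propensity scores:
      - the logging-policy propensity (corrects policy mismatch), and
      - the reward-observation propensity (corrects missing-not-at-random rewards).

    rewards       : logged reward values (0 where the reward was not observed)
    observed      : binary indicator that the reward was actually observed
    target_probs  : pi_e(a|x), probability of the logged action under the target policy
    logging_probs : pi_0(a|x), probability of the logged action under the logging policy
    obs_probs     : estimated probability that the reward is observed (e.g., position-based)
    """
    # Importance weight correcting for the mismatch between target and logging policies.
    policy_weight = target_probs / logging_probs
    # Additional weight correcting for biased (MNAR) reward observation.
    obs_weight = observed / obs_probs
    # Average the doubly re-weighted rewards over all logged interactions.
    return np.mean(policy_weight * obs_weight * rewards)
```

Under standard assumptions (known, non-zero propensities and observation indicators independent of rewards given the propensities), such a doubly weighted average is unbiased for the target policy's value; the abstract's claim is that using both propensities keeps the estimate stable as reward-observation bias grows.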

Related articles:
arXiv:2006.06982 [stat.ML] (Published 2020-06-12)
Confidence Interval for Off-Policy Evaluation from Dependent Samples via Bandit Algorithm: Approach from Standardized Martingales
arXiv:2112.09865 [stat.ML] (Published 2021-12-18, updated 2024-08-18)
Off-Policy Evaluation Using Information Borrowing and Context-Based Switching
arXiv:2212.06355 [stat.ML] (Published 2022-12-13)
A Review of Off-Policy Evaluation in Reinforcement Learning