arXiv:2002.11642 [stat.ML]

Off-Policy Evaluation and Learning for External Validity under a Covariate Shift

Masahiro Kato, Masatoshi Uehara, Shota Yasui

Published 2020-02-26 (Version 1)

We consider evaluating and training a new policy on evaluation data by using historical data obtained under a different policy. The goal of off-policy evaluation (OPE) is to estimate the expected reward of a new policy over the evaluation data, and that of off-policy learning (OPL) is to find a new policy that maximizes the expected reward over the evaluation data. Although standard OPE and OPL assume that the historical and evaluation data share the same covariate distribution, a covariate shift often arises in practice, i.e., the covariate distribution of the historical data differs from that of the evaluation data. In this paper, we derive the efficiency bound of OPE under a covariate shift. We then propose doubly robust and efficient estimators for OPE and OPL under a covariate shift by using an estimator of the density ratio between the covariate distributions of the historical and evaluation data. We also discuss other possible estimators and compare their theoretical properties. Finally, we confirm the effectiveness of the proposed estimators through experiments.
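As a minimal sketch of the general form such an estimator can take (illustrative notation only, not a statement of the paper's exact estimator): let (x_i, a_i, y_i) be historical covariate-action-reward triples, \tilde{x}_j the evaluation covariates, \pi_b the historical (behavior) policy, \hat{q} a fitted reward model, and \hat{w}(x) an estimate of the density ratio of the evaluation to the historical covariate distribution. A density-ratio-weighted doubly robust value estimate for a policy \pi can then be written as

\[
\hat{V}(\pi) \;=\; \frac{1}{n}\sum_{i=1}^{n} \hat{w}(x_i)\,\frac{\pi(a_i \mid x_i)}{\pi_b(a_i \mid x_i)}\,\bigl(y_i - \hat{q}(x_i, a_i)\bigr)
\;+\; \frac{1}{m}\sum_{j=1}^{m} \sum_{a} \pi(a \mid \tilde{x}_j)\,\hat{q}(\tilde{x}_j, a),
\]

where the first term corrects the reward model's bias on the historical sample and the second term is the model-based value on the evaluation sample. Under standard conditions, an estimator of this form remains consistent if either the reward model \hat{q} or the weights (the density ratio together with the behavior policy) are correctly specified, which is the sense in which "doubly robust" is used in this literature.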

Related articles:
arXiv:2006.06982 [stat.ML] (Published 2020-06-12)
Confidence Interval for Off-Policy Evaluation from Dependent Samples via Bandit Algorithm: Approach from Standardized Martingales
arXiv:2406.00317 [stat.ML] (Published 2024-06-01)
Combining Experimental and Historical Data for Policy Evaluation
arXiv:2112.09865 [stat.ML] (Published 2021-12-18, updated 2024-08-18)
Off-Policy Evaluation Using Information Borrowing and Context-Based Switching