arXiv:1911.07676 [stat.ML]

Learning with Good Feature Representations in Bandits and in RL with a Generative Model

Tor Lattimore, Csaba Szepesvari

Published 2019-11-18 (Version 1)

The construction in the recent paper by Du et al. [2019] implies that searching for a near-optimal action in a bandit sometimes requires examining essentially all the actions, even if the learner is given linear features in $\mathbb R^d$ that approximate the rewards with a small uniform error. In this note we use the Kiefer-Wolfowitz theorem to show that by checking only a few actions, a learner can always find an action that is suboptimal by at most $O(\varepsilon \sqrt{d})$, where $\varepsilon$ is the approximation error of the features. Thus, features are useful when the approximation error is small relative to the dimensionality of the features. The idea is applied to stochastic bandits and to reinforcement learning with a generative model, where the learner has access to $d$-dimensional linear features that approximate the action-value functions of all policies to an accuracy of $\varepsilon$. For bandits we prove a regret bound of order $\sqrt{dn \log(k)} + \varepsilon n \sqrt{d} \log(n)$, with $k$ the number of actions and $n$ the horizon. For RL we show that approximate policy iteration can learn a policy that is optimal up to an additive error of order $\varepsilon \sqrt{d} / (1 - \gamma)^2$ using about $d / (\varepsilon^2(1-\gamma)^4)$ samples from the generative model.
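
To make the Kiefer-Wolfowitz idea concrete, here is a minimal Python sketch (not taken from the paper) of the kind of procedure the abstract describes: compute an approximate G-optimal design over the action features, query rewards only on the design's small support, fit least squares, and play the action with the largest predicted reward. The Frank-Wolfe design solver, the reward_oracle interface, and all function names here are illustrative assumptions, not the authors' algorithm.

import numpy as np

def g_optimal_design(features, iters=1000, tol=1e-3):
    # Approximate Kiefer-Wolfowitz (G-optimal) design via Frank-Wolfe updates.
    # features: (k, d) array with one feature vector per action.
    k, d = features.shape
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        V = features.T @ (features * pi[:, None])        # V(pi) = sum_a pi(a) x_a x_a^T
        V_inv = np.linalg.pinv(V)
        g = np.einsum('ij,jk,ik->i', features, V_inv, features)  # g(a) = x_a^T V(pi)^{-1} x_a
        a = int(np.argmax(g))
        if g[a] <= d * (1.0 + tol):                      # Kiefer-Wolfowitz: optimal design has max_a g(a) = d
            break
        step = (g[a] / d - 1.0) / (g[a] - 1.0)           # standard Frank-Wolfe step size for D-optimal design
        pi = (1.0 - step) * pi
        pi[a] += step
    return pi

def pick_near_optimal_action(features, reward_oracle, reps=10):
    # Query rewards only on the (small) support of the design, fit least
    # squares, and return the action with the largest predicted reward.
    pi = g_optimal_design(features)
    support = np.where(pi > 1e-6)[0]                     # only a few distinct actions are examined
    X, y = [], []
    for a in support:
        for _ in range(reps):                            # average over noisy reward observations
            X.append(features[a])
            y.append(reward_oracle(a))
    theta, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)
    return int(np.argmax(features @ theta))

The Kiefer-Wolfowitz theorem guarantees an optimal design supported on at most d(d+1)/2 actions, so the number of actions that need to be checked is controlled by the feature dimension d rather than by the total number of actions k, which is the mechanism behind the O(ε√d) suboptimality guarantee stated in the abstract.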

Related articles:
arXiv:1712.01664 [stat.ML] (Published 2017-12-05)
Learning a Generative Model for Validity in Complex Discrete Structures
arXiv:1910.10046 [stat.ML] (Published 2019-10-22)
Uncertainty Quantification with Generative Models
arXiv:1812.09771 [stat.ML] (Published 2018-12-23)
A determinantal point process for column subset selection