arXiv:1902.03633 [cs.LG]

Diverse Exploration via Conjugate Policies for Policy Gradient Methods

Andrew Cohen, Xingye Qiao, Lei Yu, Elliot Way, Xiangrong Tong

Published 2019-02-10 (Version 1)

We address the challenge of effective exploration while maintaining good performance in policy gradient methods. As a solution, we propose diverse exploration (DE) via conjugate policies. DE learns and deploys a set of conjugate policies, which can be conveniently generated as a byproduct of conjugate gradient descent. We provide both theoretical and empirical results showing that DE achieves effective exploration, improves policy performance, and outperforms exploration by random policy perturbations.
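As a rough illustration (not the authors' implementation, whose details are not given in this abstract), the sketch below shows how the search directions produced while solving a natural-gradient system F x = g by conjugate gradient are mutually conjugate with respect to F, and how they could be reused to perturb a policy's parameters into a set of conjugate policies. The names fisher_vector_product, theta, and the perturbation scale are hypothetical placeholders.

import numpy as np

def conjugate_gradient(fvp, g, iters=10, tol=1e-10):
    # Solve F x = g by the conjugate gradient method.
    # fvp(v) returns the Fisher-vector product F @ v.
    # Returns the solution x and the search directions p_k,
    # which are mutually F-conjugate (p_i^T F p_j = 0 for i != j).
    x = np.zeros_like(g)
    r = g.copy()          # residual g - F x (x starts at 0)
    p = r.copy()          # first search direction
    directions = []
    rs_old = r @ r
    for _ in range(iters):
        Fp = fvp(p)
        alpha = rs_old / (p @ Fp)
        x += alpha * p
        directions.append(p.copy())
        r -= alpha * Fp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x, directions

def make_conjugate_policies(theta, directions, scale=0.01):
    # Hypothetical usage: perturb the flat parameter vector theta along
    # each (normalized) conjugate direction to obtain a set of policies.
    return [theta + scale * d / (np.linalg.norm(d) + 1e-8) for d in directions]

Because the directions are conjugate under the Fisher matrix, such perturbations explore distinct directions in parameter space rather than redundant ones, which is the intuition the abstract contrasts with random policy perturbations.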

Related articles:
arXiv:1908.03263 [cs.LG] (Published 2019-08-08)
Trajectory-wise Control Variates for Variance Reduction in Policy Gradient Methods
arXiv:1912.05104 [cs.LG] (Published 2019-12-11)
Entropy Regularization with Discounted Future State Distribution in Policy Gradient Methods
arXiv:1810.02525 [cs.LG] (Published 2018-10-05)
Where Did My Optimum Go?: An Empirical Analysis of Gradient Descent Optimization in Policy Gradient Methods