arXiv:2002.06043 [cs.LG]

Estimating Gradients for Discrete Random Variables by Sampling without Replacement

Wouter Kool, Herke van Hoof, Max Welling

Published 2020-02-14, Version 1

We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement, which reduces variance by avoiding duplicate samples. We show that our estimator can be derived as the Rao-Blackwellization of three different estimators. Combining our estimator with REINFORCE, we obtain a policy gradient estimator and reduce its variance using a built-in control variate that requires no additional model evaluations. The resulting estimator is closely related to other gradient estimators. Experiments with a toy problem, a categorical Variational Auto-Encoder and a structured prediction problem show that ours is the only estimator that is consistently among the best in both high- and low-entropy settings.
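The core idea can be illustrated with a short sketch (our own code, not the authors'): draw a size-k sample without replacement via Gumbel-top-k perturbation, then reweight each sampled category by p_i / q_i(kappa), where kappa is the (k+1)-th largest perturbed log-probability and q_i(kappa) = P(Gumbel(log p_i) > kappa) is the conditional inclusion probability. All function names below are illustrative.

```python
import math
import random

def sample_without_replacement(log_p, k):
    """Gumbel-top-k: perturb each log-probability with independent Gumbel
    noise and keep the k largest. Also return kappa, the (k+1)-th largest
    perturbed value, which the importance weights need. Requires k < len(log_p)."""
    g = [lp - math.log(-math.log(random.random())) for lp in log_p]
    order = sorted(range(len(log_p)), key=lambda i: -g[i])
    return order[:k], g[order[k]]

def estimate(log_p, f, k):
    """Unbiased estimate of E_p[f] from one sample without replacement:
    sum over sampled i of (p_i / q_i(kappa)) * f(i), where
    q_i(kappa) = P(Gumbel(log p_i) > kappa) = 1 - exp(-exp(log p_i - kappa))."""
    S, kappa = sample_without_replacement(log_p, k)
    total = 0.0
    for i in S:
        q = 1.0 - math.exp(-math.exp(log_p[i] - kappa))
        total += math.exp(log_p[i]) / q * f(i)
    return total
```

Averaging many such estimates recovers the true expectation; because the k sampled categories are distinct, duplicate evaluations of f are avoided, which is the source of the variance reduction the abstract describes.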

Related articles: Most relevant | Search more
arXiv:2002.10400 [cs.LG] (Published 2020-02-24)
Closing the convergence gap of SGD without replacement
arXiv:1611.00712 [cs.LG] (Published 2016-11-02)
The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
arXiv:1901.11311 [cs.LG] (Published 2019-01-31)
New Tricks for Estimating Gradients of Expectations