arXiv Analytics

arXiv:1203.4523 [cs.LG]

On the Equivalence between Herding and Conditional Gradient Algorithms

Francis Bach, Simon Lacoste-Julien, Guillaume Obozinski

Published 2012-03-20, updated 2012-09-11 (version 2)

We show that the herding procedure of Welling (2009) takes exactly the form of a standard convex optimization algorithm, namely a conditional gradient algorithm minimizing a quadratic moment discrepancy. This link enables us to invoke convergence results from convex optimization and to consider faster alternatives for the task of approximating integrals in a reproducing kernel Hilbert space. We study the behavior of the different variants through numerical simulations. The experiments indicate that while we can improve over herding on the task of approximating integrals, the original herding algorithm tends to approach the maximum entropy distribution more often, shedding more light on the learning bias behind herding.
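
To make the stated equivalence concrete, here is a minimal, hypothetical sketch of kernel herding viewed as a conditional gradient (Frank-Wolfe) method on the quadratic moment discrepancy J(g) = 1/2 ||g - mu_p||^2 in an RKHS. Everything in it (the Gaussian RBF kernel, the finite candidate grid, and names such as rbf, mu_p, and candidates) is an illustrative assumption, not code from the paper; note that uniform averaging over the selected points corresponds to the standard 1/(t+1) Frank-Wolfe step size.

```python
import numpy as np

# Hypothetical sketch: kernel herding as conditional gradient (Frank-Wolfe)
# on J(g) = 1/2 ||g - mu_p||^2 in an RKHS. The kernel, grid, and target
# distribution below are illustrative assumptions, not the paper's setup.

def rbf(a, b, gamma=2.0):
    """Gaussian RBF kernel matrix between point sets a (n, d) and b (m, d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(500, 1))   # samples from the target p
candidates = np.linspace(-4, 4, 401)[:, None]  # finite grid standing in for the domain

K_ct = rbf(candidates, target)      # k(x, s_j) for every candidate x, sample s_j
K_cc = rbf(candidates, candidates)  # k(x, x') among candidates

mu_p = K_ct.mean(axis=1)            # <Phi(x), mu_p> for every candidate x

selected = []
for t in range(50):
    # <Phi(x), mu_t>, where mu_t is the mean embedding of the points chosen so far;
    # uniform averaging is exactly the 1/(t+1) Frank-Wolfe step size.
    mu_t = K_cc[:, selected].mean(axis=1) if selected else np.zeros(len(candidates))
    # Conditional-gradient linear oracle: maximize <Phi(x), mu_p - mu_t>.
    selected.append(int(np.argmax(mu_p - mu_t)))

# Squared moment discrepancy ||mu_t - mu_p||^2 of the herded empirical measure,
# expanded with the kernel trick; it should shrink as t grows.
sel = np.array(selected)
mmd2 = (K_cc[np.ix_(sel, sel)].mean()
        - 2 * K_ct[sel].mean()
        + rbf(target, target).mean())
print(f"squared moment discrepancy after {len(sel)} herding steps: {mmd2:.4f}")
```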

Journal: ICML 2012, International Conference on Machine Learning, Edinburgh, United Kingdom (2012)
Categories: cs.LG, math.OC, stat.ML
Related articles:
arXiv:2002.04753 [cs.LG] (Published 2020-02-12)
A Random-Feature Based Newton Method for Empirical Risk Minimization in Reproducing Kernel Hilbert Space
arXiv:2002.11187 [cs.LG] (Published 2020-02-25)
Reliable Estimation of Kullback-Leibler Divergence by Controlling Discriminator Complexity in the Reproducing Kernel Hilbert Space
arXiv:2111.03469 [cs.LG] (Published 2021-11-05, updated 2022-03-28)
Perturbational Complexity by Distribution Mismatch: A Systematic Analysis of Reinforcement Learning in Reproducing Kernel Hilbert Space