arXiv:1906.03559 [stat.ML]

The Implicit Bias of AdaGrad on Separable Data

Qian Qian, Xiaoyuan Qian

Published 2019-06-09, Version 1

We study the implicit bias of AdaGrad on separable linear classification problems. We show that AdaGrad converges in direction to the solution of a quadratic optimization problem whose feasible set is the same as that of the hard-margin SVM problem. We also discuss how different choices of AdaGrad's hyperparameters may affect this direction. These results offer a deeper understanding of why adaptive methods often appear to generalize worse than gradient descent in practice.
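For concreteness, here is a minimal LaTeX sketch of the characterization the abstract describes. On a linearly separable dataset $\{(x_i, y_i)\}_{i=1}^n$ with labels $y_i \in \{\pm 1\}$, the hard-margin SVM direction solves the first program below; per the abstract, AdaGrad's limit direction solves a quadratic program over the same feasible set, sketched in the second program with a positive-definite diagonal matrix $G$ standing in (our notation, not necessarily the paper's) for the limit of AdaGrad's coordinate-wise preconditioner:

% Hard-margin SVM: minimize the Euclidean norm subject to margin constraints.
\min_{w}\ \|w\|_2^2 \quad \text{s.t.} \quad y_i \langle w, x_i \rangle \ge 1, \quad i = 1, \dots, n

% Sketched AdaGrad limit direction: same feasible set, but a G-weighted
% quadratic objective; G (our notation, an assumption) is a positive-definite
% diagonal matrix playing the role of AdaGrad's limiting preconditioner.
\min_{w}\ w^{\top} G\, w \quad \text{s.t.} \quad y_i \langle w, x_i \rangle \ge 1, \quad i = 1, \dots, n

When $G$ is a multiple of the identity, the two programs coincide; otherwise the weighted objective can tilt the limit direction away from the max-margin solution, which is consistent with the abstract's point that hyperparameter choices shape this direction.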

Related articles:
arXiv:2406.10650 [stat.ML] (Published 2024-06-15)
The Implicit Bias of Adam on Separable Data
arXiv:1803.01905 [stat.ML] (Published 2018-03-05)
Convergence of Gradient Descent on Separable Data
arXiv:2302.09376 [stat.ML] (Published 2023-02-18)
Parameter Averaging for SGD Stabilizes the Implicit Bias towards Flat Regions