arXiv:1902.07656 [cs.LG]

LOSSGRAD: automatic learning rate in gradient descent

Bartosz Wójcik, Łukasz Maziarka, Jacek Tabor

Published 2019-02-20 (Version 1)

In this paper, we propose a simple, fast, and easy-to-implement algorithm LOSSGRAD (locally optimal step-size in gradient descent), which automatically modifies the step-size in gradient descent during neural network training. Given a function $f$, a point $x$, and the gradient $\nabla_x f$ of $f$, we aim to find the step-size $h$ which is (locally) optimal, i.e. satisfies: $$ h = \arg\min_{t \geq 0} f(x - t \nabla_x f). $$ Making use of a quadratic approximation, we show that the algorithm satisfies the above condition. We experimentally show that our method is insensitive to the choice of the initial learning rate while achieving results comparable to other methods.
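
The step-size criterion above lends itself to a small numerical illustration. The sketch below is not the LOSSGRAD update from the paper; it is a hypothetical stand-in that estimates the locally optimal step by fitting a parabola to $\phi(t) = f(x - t \nabla_x f)$ at three trial values of $t$, assuming NumPy and user-supplied `f` and `grad_f`.

```python
import numpy as np

def quadratic_step(f, x, grad, h_trial):
    """Estimate a locally optimal step-size along -grad.

    Fits a parabola to phi(t) = f(x - t*grad) at t = 0, h_trial, 2*h_trial
    and returns the minimizer of the fit, falling back to h_trial when the
    fit is not convex. Illustrative sketch only, not the paper's LOSSGRAD rule.
    """
    t = np.array([0.0, h_trial, 2.0 * h_trial])
    phi = np.array([f(x - ti * grad) for ti in t])
    a, b, _ = np.polyfit(t, phi, 2)        # exact degree-2 fit through 3 points
    if a <= 0:                             # non-convex fit: keep the trial step
        return h_trial
    return max(-b / (2.0 * a), 0.0)        # vertex of the parabola, clipped at 0


def gradient_descent(f, grad_f, x0, h0=0.1, steps=100):
    """Plain gradient descent that re-estimates the step-size each iteration."""
    x, h = np.asarray(x0, dtype=float), h0
    for _ in range(steps):
        g = grad_f(x)
        h = quadratic_step(f, x, g, h) or h0   # re-seed if the estimate collapses to 0
        x = x - h * g
    return x


# Example: minimize a simple quadratic; the optimum is x = (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + 5.0 * (x[1] + 2.0) ** 2
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 10.0 * (x[1] + 2.0)])
print(gradient_descent(f, grad_f, x0=[5.0, 5.0]))
```

Reusing the previous step-size as the next trial value mirrors the adaptive spirit of the abstract (the step evolves with the local landscape rather than being fixed in advance), though the paper's actual procedure should be consulted for the precise update.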

Related articles:
arXiv:1812.10004 [cs.LG] (Published 2018-12-25)
Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?
arXiv:2001.06472 [cs.LG] (Published 2020-01-17)
Gradient descent with momentum --- to accelerate or to super-accelerate?
arXiv:1805.00869 [cs.LG] (Published 2018-05-02)
Approximate Temporal Difference Learning is a Gradient Descent for Reversible Policies