arXiv Analytics

arXiv:1511.02540 [math.OC]

Speed learning on the fly

Pierre-Yves Massé, Yann Ollivier

Published 2015-11-08, Version 1

The practical performance of online stochastic gradient descent algorithms depends strongly on the chosen step size, which in many applications must be tediously hand-tuned. The same is true for more advanced variants of stochastic gradients, such as SAGA, SVRG, or AdaGrad. Here we propose to adapt the step size by performing a gradient descent on the step size itself, viewing the performance of the whole learning trajectory as a function of the step size. Importantly, this adaptation can be computed online at little cost, without iterating backward passes over the full data.
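To make the idea concrete, here is a minimal sketch, not the authors' exact algorithm, of what "gradient descent on the step size itself" can look like for plain SGD on a toy least-squares problem. The sensitivity recursion, the normalisation of the hypergradient, and all constants below are illustrative assumptions made for this example.

```python
# Sketch: adapt the step size online by a gradient step on log(eta),
# using a sensitivity vector h that tracks how the parameters depend
# on log(eta) along the trajectory.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
dim = 10
w_true = rng.normal(size=dim)

def sample():
    """Draw one (input, target) pair from a noisy linear model."""
    x = rng.normal(size=dim)
    return x, x @ w_true + 0.1 * rng.normal()

w = np.zeros(dim)        # model parameters
h = np.zeros(dim)        # sensitivity dw/d(log eta), maintained online
log_eta = np.log(1e-4)   # start with a deliberately tiny step size
mu = 1e-2                # "step size of the step size" (assumed constant)

for t in range(20_000):
    x, y = sample()
    g = (x @ w - y) * x                 # stochastic gradient of 0.5*(x.w - y)^2
    eta = np.exp(log_eta)

    # Hypergradient: derivative of the current loss with respect to log(eta),
    # obtained through the sensitivity vector h (chain rule along the trajectory).
    hypergrad = g @ h

    # Gradient step on log(eta), normalised for robustness
    # (a choice made for this sketch, not taken from the paper).
    log_eta -= mu * hypergrad / (np.linalg.norm(g) * np.linalg.norm(h) + 1e-12)

    # Online recursion for h: differentiate w_{t+1} = w_t - eta * g_t(w_t)
    # with respect to log(eta); for this quadratic loss the Hessian-vector
    # product is (x.h) * x.
    h = h - eta * g - eta * (x @ h) * x

    # Ordinary SGD step on the parameters with the adapted step size.
    w = w - eta * g

print(f"adapted step size: {np.exp(log_eta):.4f}")
print(f"parameter error:   {np.linalg.norm(w - w_true):.4f}")
```

The key point matching the abstract is that everything is computed forward, alongside the usual updates: the sensitivity vector h is maintained by a cheap recursion, so no backward pass over the full data is ever needed.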

Related articles: Most relevant | Search more
arXiv:1204.3034 [math.OC] (Published 2012-04-13, updated 2012-06-18)
Lower Bounds on the Performance of Analog to Digital Converters
arXiv:2210.04757 [math.OC] (Published 2022-10-10)
On the Performance of Gradient Tracking with Local Updates
arXiv:1711.09407 [math.OC] (Published 2017-11-26)
A note on using performance and data profiles for training algorithms