arXiv:1706.06569 [cs.LG]

A Unified Approach to Adaptive Regularization in Online and Stochastic Optimization

Vineet Gupta, Tomer Koren, Yoram Singer

Published 2017-06-20 (Version 1)

We describe a framework for deriving and analyzing online optimization algorithms that incorporate adaptive, data-dependent regularization, also termed preconditioning. Such algorithms have proven useful in stochastic optimization, where they reshape the gradients according to the geometry of the data. Our framework captures and unifies much of the existing literature on adaptive online methods, including the AdaGrad and Online Newton Step algorithms as well as their diagonal versions. As a result, we obtain new convergence proofs for these algorithms that are substantially simpler than previous analyses. Our framework also exposes the rationale behind the different preconditioned updates used in common stochastic optimization methods.
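As an illustration of the diagonal preconditioning the abstract refers to, below is a minimal sketch of a diagonal-AdaGrad step. The function name, the learning rate lr, and the eps stabilizer are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def adagrad_step(w, grad, accum, lr=0.1, eps=1e-8):
    """One diagonal-AdaGrad update (sketch): each coordinate of the
    gradient is rescaled by the inverse square root of its accumulated
    squared gradients, i.e. a data-dependent diagonal preconditioner."""
    accum = accum + grad ** 2                   # per-coordinate second-moment accumulator
    w = w - lr * grad / (np.sqrt(accum) + eps)  # preconditioned gradient step
    return w, accum

# Example: a few steps on f(w) = 0.5 * ||w||^2, whose gradient is w itself.
w = np.array([1.0, -2.0])
accum = np.zeros_like(w)
for _ in range(100):
    w, accum = adagrad_step(w, w, accum)
```

Full-matrix variants maintain an outer-product matrix of past gradients and precondition with its inverse square root (AdaGrad) or its inverse (Online Newton Step); the diagonal version sketched here is the cheaper special case.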

Related articles:
arXiv:1908.05474 [cs.LG] (Published 2019-08-15)
Adaptive Regularization of Labels
arXiv:2303.13113 [cs.LG] (Published 2023-03-23)
Adaptive Regularization for Class-Incremental Learning
arXiv:2310.12244 [cs.LG] (Published 2023-10-18)
A Unified Approach to Domain Incremental Learning with Memory: Theory and Algorithm