arXiv:2502.07529 [cs.LG]

Training Deep Learning Models with Norm-Constrained LMOs

Thomas Pethick, Wanyun Xie, Kimon Antonakopoulos, Zhenyu Zhu, Antonio Silveti-Falls, Volkan Cevher

Published 2025-02-11 (Version 1)

In this work, we study optimization methods that leverage the linear minimization oracle (LMO) over a norm-ball. We propose a new stochastic family of algorithms that uses the LMO to adapt to the geometry of the problem and, perhaps surprisingly, show that they can be applied to unconstrained problems. The resulting update rule unifies several existing optimization methods under a single framework. Furthermore, we propose an explicit choice of norm for deep architectures, which, as a side benefit, leads to the transferability of hyperparameters across model sizes. Experimentally, we demonstrate significant speedups on nanoGPT training without any reliance on Adam. The proposed method is memory-efficient, requiring only one set of model weights and one set of gradients, which can be stored in half-precision.
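To make the abstract's update rule concrete, below is a minimal sketch of what an LMO-driven step can look like, assuming an l-infinity norm-ball, whose LMO is simply the scaled negative sign of its input. The function names (linf_lmo, lmo_step) and all hyperparameter values are illustrative, and the sketch does not reproduce the paper's proposed per-layer norm choice for deep architectures; it only shows the generic pattern of momentum-averaging the stochastic gradient and stepping toward the LMO output.

import torch

def linf_lmo(d: torch.Tensor, radius: float) -> torch.Tensor:
    # LMO over an l-infinity ball: argmin_{||s||_inf <= radius} <d, s>.
    # For this ball the minimizer is the negated sign of d, scaled by the radius.
    return -radius * torch.sign(d)

@torch.no_grad()
def lmo_step(params, momenta, radius=1.0, lr=0.1, beta=0.9):
    # One illustrative LMO-based update (not the paper's exact algorithm):
    # momentum-average the stochastic gradient, then move along the LMO direction.
    # Only the parameters and one momentum buffer per tensor are kept, which is
    # consistent with the memory footprint described in the abstract.
    for p, m in zip(params, momenta):
        m.mul_(beta).add_(p.grad, alpha=1.0 - beta)  # d_t = beta * d_{t-1} + (1 - beta) * g_t
        p.add_(linf_lmo(m, radius), alpha=lr)        # w_{t+1} = w_t + lr * lmo(d_t)

In use, one would allocate the buffers once, e.g. momenta = [torch.zeros_like(p) for p in model.parameters()], and call lmo_step(list(model.parameters()), momenta) after each loss.backward(). Because the step direction depends on the gradient only through the LMO, its scale is set by the ball's radius rather than by per-coordinate second-moment statistics as in Adam.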

Related articles:
arXiv:1906.04278 [cs.LG] (Published 2019-06-10)
Performance Analysis and Characterization of Training Deep Learning Models on NVIDIA TX2
arXiv:1912.06761 [cs.LG] (Published 2019-12-14)
Training Deep Learning models with small datasets
arXiv:2306.08323 [cs.LG] (Published 2023-06-14)
How to estimate carbon footprint when training deep learning models? A guide and review