arXiv Analytics

arXiv:2106.10575 [cs.LG]

EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization

Ondrej Bohdal, Yongxin Yang, Timothy Hospedales

Published 2021-06-19 (Version 1)

Gradient-based meta-learning and hyperparameter optimization have seen significant progress recently, enabling practical end-to-end training of neural networks together with many hyperparameters. Nevertheless, existing approaches are relatively expensive, as they need to compute second-order derivatives and store a longer computational graph. This cost prevents scaling them to larger network architectures. We present EvoGrad, a new approach to meta-learning that draws upon evolutionary techniques to compute hypergradients more efficiently. EvoGrad estimates the hypergradient with respect to the hyperparameters without calculating second-order gradients or storing a longer computational graph, leading to significant improvements in efficiency. We evaluate EvoGrad on two substantial recent meta-learning applications, namely cross-domain few-shot learning with feature-wise transformations and noisy label learning with MetaWeightNet. The results show that EvoGrad significantly improves efficiency and enables scaling meta-learning to larger CNN architectures, for example from ResNet18 to ResNet34.
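The abstract does not spell out the algorithm, so the following is only a minimal sketch of one way an evolution-inspired, first-order hypergradient estimate can be formed: perturb the model parameters, weight the perturbed copies by their hyperparameter-dependent training losses, and backpropagate the validation loss of the weighted combination to the hyperparameters. The toy regression task, the per-example loss weights lam, the population size K, and all other names below are assumptions made for illustration, not details taken from the paper.

# Illustrative sketch only: a first-order, evolution-style hypergradient estimate.
# The toy task, the per-example weights `lam`, and all other names are assumptions
# made for this example, not details taken from the EvoGrad paper.
import torch

torch.manual_seed(0)

# Toy data: scalar linear regression with a training and a validation split.
x_train, y_train = torch.randn(32, 1), torch.randn(32, 1)
x_val, y_val = torch.randn(32, 1), torch.randn(32, 1)

theta = torch.randn(1, requires_grad=True)   # model parameter
lam = torch.zeros(32, requires_grad=True)    # hyperparameters: per-example loss weights (hypothetical)

K, sigma = 4, 0.01                           # population size and perturbation scale

def train_loss(p):
    # Hyperparameter-dependent training loss (each example weighted by sigmoid(lam)).
    per_example = (x_train * p - y_train) ** 2
    return (torch.sigmoid(lam).unsqueeze(1) * per_example).mean()

def val_loss(p):
    return ((x_val * p - y_val) ** 2).mean()

# 1) Sample K randomly perturbed copies of the current parameters. The copies are
#    detached from theta, so no second-order graph is ever built.
population = [theta.detach() + sigma * torch.randn_like(theta) for _ in range(K)]

# 2) Score each copy with the hyperparameter-dependent training loss.
losses = torch.stack([train_loss(p) for p in population])

# 3) Turn the scores into combination weights (lower loss -> higher weight).
weights = torch.softmax(-losses, dim=0)

# 4) Form an updated parameter as the weighted combination of the copies, then
#    differentiate its validation loss with respect to the hyperparameters.
#    Only first-order gradients are involved.
theta_new = sum(w * p for w, p in zip(weights, population))
hypergrad = torch.autograd.grad(val_loss(theta_new), lam)[0]

print(hypergrad.shape)   # one hypergradient entry per example weight

In this sketch the only backpropagation runs from the validation loss through the softmax weights back to lam, which is what keeps memory and compute close to ordinary first-order training rather than requiring second-order derivatives.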

Related articles:
arXiv:2109.03670 [cs.LG] (Published 2021-09-08)
YAHPO Gym -- Design Criteria and a new Multifidelity Benchmark for Hyperparameter Optimization
arXiv:2506.19540 [cs.LG] (Published 2025-06-24)
Overtuning in Hyperparameter Optimization
arXiv:2410.10417 [cs.LG] (Published 2024-10-14)
A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning