{ "id": "2106.10575", "version": "v1", "published": "2021-06-19T21:51:39.000Z", "updated": "2021-06-19T21:51:39.000Z", "title": "EvoGrad: Efficient Gradient-Based Meta-Learning and Hyperparameter Optimization", "authors": [ "Ondrej Bohdal", "Yongxin Yang", "Timothy Hospedales" ], "categories": [ "cs.LG", "cs.NE", "stat.ML" ], "abstract": "Gradient-based meta-learning and hyperparameter optimization have seen significant progress recently, enabling practical end-to-end training of neural networks together with many hyperparameters. Nevertheless, existing approaches are relatively expensive as they need to compute second-order derivatives and store a longer computational graph. This cost prevents scaling them to larger network architectures. We present EvoGrad, a new approach to meta-learning that draws upon evolutionary techniques to more efficiently compute hypergradients. EvoGrad estimates hypergradient with respect to hyperparameters without calculating second-order gradients, or storing a longer computational graph, leading to significant improvements in efficiency. We evaluate EvoGrad on two substantial recent meta-learning applications, namely cross-domain few-shot learning with feature-wise transformations and noisy label learning with MetaWeightNet. The results show that EvoGrad significantly improves efficiency and enables scaling meta-learning to bigger CNN architectures such as from ResNet18 to ResNet34.", "revisions": [ { "version": "v1", "updated": "2021-06-19T21:51:39.000Z" } ], "analyses": { "keywords": [ "hyperparameter optimization", "efficient gradient-based meta-learning", "longer computational graph", "larger network architectures", "evograd estimates hypergradient" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }