arXiv:2002.11770 [cs.CV]

Rethinking the Hyperparameters for Fine-tuning

Hao Li, Pratik Chaudhari, Hao Yang, Michael Lam, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

Published 2020-02-19 (Version 1)

Fine-tuning from pre-trained ImageNet models has become the de facto standard for various computer vision tasks. Current practice typically involves making an ad hoc choice of hyperparameters and keeping them fixed at values normally used for training from scratch. This paper re-examines several common practices for setting fine-tuning hyperparameters. Our findings are based on extensive empirical evaluation on various transfer learning benchmarks. (1) While prior work has thoroughly investigated learning rate and batch size, momentum is a relatively unexplored hyperparameter for fine-tuning. We find that the value of momentum also affects fine-tuning performance and connect this with previous theoretical findings. (2) Optimal hyperparameters for fine-tuning, in particular the effective learning rate, are not only dataset-dependent but also sensitive to the similarity between the source and target domains. This is in contrast to hyperparameters for training from scratch. (3) Reference-based regularization, which keeps the model close to its initialization, does not necessarily help on "dissimilar" datasets. Our findings challenge common fine-tuning practices and encourage deep learning practitioners to rethink the hyperparameters for fine-tuning.
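For readers who want to experiment with these findings, the minimal PyTorch sketch below (not the authors' code; effective_lr, momentum, and alpha are hypothetical values to tune per target dataset) illustrates the two quantities the abstract highlights: the effective learning rate lr / (1 - m), which couples the SGD learning rate and momentum, and an L2-SP-style penalty, one common instance of reference-based regularization that keeps the fine-tuned weights close to the pre-trained initialization.

    # Minimal sketch, assuming a standard torchvision ResNet-50 and plain SGD.
    import torch
    from torchvision import models

    model = models.resnet50(weights="IMAGENET1K_V1")  # pre-trained ImageNet model

    # (2) Effective learning rate: with momentum m, SGD's update scale is
    # roughly lr / (1 - m), so hold that ratio fixed when sweeping momentum.
    effective_lr = 0.01                 # hypothetical target value
    momentum = 0.9
    lr = effective_lr * (1 - momentum)  # keeps lr / (1 - m) constant
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)

    # (3) Reference-based regularization (L2-SP style): penalize the squared
    # distance from the pre-trained weights. Per the abstract, this does not
    # necessarily help when the target dataset is dissimilar to the source.
    reference = {n: p.detach().clone() for n, p in model.named_parameters()}

    def l2_sp_penalty(model, reference, alpha=1e-3):  # alpha is hypothetical
        return alpha * sum((p - reference[n]).pow(2).sum()
                           for n, p in model.named_parameters())

    # Inside a training loop:
    #   loss = task_loss + l2_sp_penalty(model, reference)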

Related articles:
arXiv:2210.09943 [cs.CV] (Published 2022-10-18)
On the Importance of Architectures and Hyperparameters for Fairness in Face Recognition
arXiv:1907.12006 [cs.CV] (Published 2019-07-28)
ROAM: Recurrently Optimizing Tracking Model
arXiv:2006.11604 [cs.CV] (Published 2020-06-20)
How do SGD hyperparameters in natural training affect adversarial robustness?