arXiv:2001.01431 [cs.LG]

Deeper Insights into Weight Sharing in Neural Architecture Search

Yuge Zhang, Zejun Lin, Junyang Jiang, Quanlu Zhang, Yujing Wang, Hui Xue, Chen Zhang, Yaming Yang

Published 2020-01-06 | Version 1

With the success of deep neural networks, Neural Architecture Search (NAS) has attracted wide attention as a way to automate model design. Because training every child model from scratch is very time-consuming, recent works leverage weight sharing to speed up model evaluation. These approaches greatly reduce computation by maintaining a single copy of weights in the super-net and sharing them among all child models. However, weight sharing has no theoretical guarantee, and its impact has not been well studied. In this paper, we conduct comprehensive experiments to reveal the impact of weight sharing: (1) the best-performing models from different runs, or even from consecutive epochs within the same run, vary significantly; (2) even with this high variance, valuable information can be extracted from training the super-net with shared weights; (3) interference between child models is a main factor inducing the high variance; (4) properly reducing the degree of weight sharing effectively reduces variance and improves performance.
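
To make the weight-sharing setup concrete, below is a minimal sketch (not the authors' implementation) of a super-net in which each searchable layer keeps one shared weight copy per candidate operation, and every sampled child architecture reuses those same weights during training. The class names (MixedOp, SuperNet), the uniform sampling of children, and the toy training loop are illustrative assumptions, not details from the paper.

```python
import random
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """One searchable layer: candidate ops whose weights are shared by all child models."""
    def __init__(self, channels):
        super().__init__()
        # These weights are created once and reused by every child model.
        self.candidates = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])

    def forward(self, x, choice):
        # A child model is defined by the candidate index chosen at each layer.
        return self.candidates[choice](x)

class SuperNet(nn.Module):
    def __init__(self, channels=16, depth=4):
        super().__init__()
        self.layers = nn.ModuleList(MixedOp(channels) for _ in range(depth))

    def sample_child(self):
        # Uniformly sample one candidate per layer (one possible sampling scheme).
        return [random.randrange(len(layer.candidates)) for layer in self.layers]

    def forward(self, x, child):
        for layer, choice in zip(self.layers, child):
            x = layer(x, choice)
        return x

# Toy training loop: each step trains a different sampled child, but all gradients
# update the single shared weight copy stored in the super-net. Interference arises
# because successive children pull the same shared weights in different directions.
net = SuperNet()
opt = torch.optim.SGD(net.parameters(), lr=0.01)
x = torch.randn(8, 16, 32, 32)       # dummy batch
target = torch.randn(8, 16, 32, 32)  # dummy regression target
for step in range(3):
    child = net.sample_child()
    loss = nn.functional.mse_loss(net(x, child), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Reducing the degree of weight sharing, as the paper suggests, would correspond here to giving disjoint groups of child models their own copies of the candidate weights instead of a single global copy.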

Related articles:
arXiv:1511.05497 [cs.LG] (Published 2015-11-17)
Learning the Architecture of Deep Neural Networks
arXiv:1710.10570 [cs.LG] (Published 2017-10-29)
Weight Initialization of Deep Neural Networks (DNNs) using Data Statistics
arXiv:1710.09282 [cs.LG] (Published 2017-10-23)
A Survey of Model Compression and Acceleration for Deep Neural Networks