arXiv:2410.08339 [cs.LG]

Simultaneous Weight and Architecture Optimization for Neural Networks

Zitong Huang, Mansooreh Montazerin, Ajitesh Srivastava

Published 2024-10-10 (Version 1)

Neural networks are trained by choosing an architecture and then training its parameters. The architecture is often chosen by trial and error or with Neural Architecture Search (NAS) methods; while NAS provides some automation, it typically relies on discrete steps that first optimize the architecture and then train the parameters. We introduce a novel neural network training framework that fundamentally transforms this process by learning the architecture and the parameters simultaneously with gradient descent. With an appropriate loss function, it can discover sparse and compact neural networks for a given dataset. Central to our approach is a multi-scale encoder-decoder, in which the encoder embeds pairs of neural networks with similar functionalities close to each other (irrespective of their architectures and weights). To train a neural network on a given dataset, we randomly sample a point in the embedding space and perform gradient descent on it using a custom loss function that incorporates a sparsity penalty to encourage compactness; the decoder then generates the neural network corresponding to the optimized embedding. Experiments demonstrate that our framework can discover sparse and compact neural networks while maintaining high performance.
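
To make the procedure concrete, below is a minimal PyTorch sketch of the core idea as described in the abstract: gradient descent on a randomly sampled embedding through a frozen decoder, with a task loss plus a sparsity penalty. This is not the authors' implementation; the decoder here is a random stand-in for their trained multi-scale decoder, it emits weights for a fixed-shape toy MLP rather than a variable architecture, and all names and values (EMBED_DIM, lam, the 784-32-10 shape) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical frozen decoder: maps a 64-d embedding to the flattened
# weights of a small MLP (784 -> 32 -> 10). In the paper this would be the
# trained decoder half of the multi-scale encoder-decoder; here it has
# random weights, purely for illustration.
EMBED_DIM = 64
W1, B1, W2, B2 = 784 * 32, 32, 32 * 10, 10
N_PARAMS = W1 + B1 + W2 + B2

decoder = nn.Sequential(
    nn.Linear(EMBED_DIM, 256), nn.ReLU(), nn.Linear(256, N_PARAMS)
)
for p in decoder.parameters():
    p.requires_grad_(False)  # decoder stays fixed; only the embedding moves

def forward_decoded(theta, x):
    """Run the decoded MLP on a batch x using the flattened weights theta."""
    w1 = theta[:W1].view(32, 784)
    b1 = theta[W1:W1 + B1]
    w2 = theta[W1 + B1:W1 + B1 + W2].view(10, 32)
    b2 = theta[-B2:]
    h = torch.relu(x @ w1.t() + b1)
    return h @ w2.t() + b2

# Randomly sample an embedding, then optimize it with gradient descent.
z = torch.randn(EMBED_DIM, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
lam = 1e-4  # assumed sparsity-penalty weight

x = torch.randn(128, 784)              # toy batch standing in for the dataset
y = torch.randint(0, 10, (128,))

for step in range(100):
    theta = decoder(z)                 # decode embedding -> network parameters
    loss = nn.functional.cross_entropy(forward_decoded(theta, x), y)
    loss = loss + lam * theta.abs().sum()  # L1 term encourages compactness
    opt.zero_grad()
    loss.backward()                    # gradients flow through the frozen decoder to z
    opt.step()
```

In this simplified sketch the L1 penalty on the decoded weights stands in for the paper's compactness objective; in the actual framework the decoder generates the network itself, so sparsity in the embedding's decoding can change the effective architecture, not just zero out weights of a fixed one.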

Comments: Accepted to NeurIPS 2024 FITML (Fine-Tuning in Modern Machine Learning) Workshop
Categories: cs.LG