arXiv Analytics

arXiv:2307.03364 [cs.LG]

Distilled Pruning: Using Synthetic Data to Win the Lottery

Luke McDermott, Daniel Cummings

Published 2023-07-07, Version 1

This work introduces a novel approach to pruning deep learning models by using distilled data. Unlike conventional strategies, which primarily focus on architectural or algorithmic optimization, our method reconsiders the role of data in pruning. Distilled datasets capture essential patterns from larger datasets, and we demonstrate how to leverage this capability to enable a computationally efficient pruning process. Our approach can find sparse, trainable subnetworks (a.k.a. Lottery Tickets) up to 5x faster than Iterative Magnitude Pruning at comparable sparsity on CIFAR-10. The experimental results highlight the potential of using distilled data for resource-efficient neural network pruning, model compression, and neural architecture search.
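To make the idea concrete, the following is a minimal PyTorch sketch of an Iterative-Magnitude-Pruning-style loop in which each retraining round is run on a small distilled dataset rather than the full training set. It is an illustrative assumption of how such a pipeline could look, not the authors' released implementation; the names `distilled_x`/`distilled_y` and all hyperparameters are hypothetical.

```python
# Hypothetical sketch: lottery-ticket-style iterative magnitude pruning,
# but with the expensive retraining step replaced by training on a small
# distilled dataset. Not the authors' code; all names are illustrative.
import copy
import torch
import torch.nn.functional as F


def magnitude_mask(model, masks, prune_fraction):
    """Globally prune the smallest-magnitude surviving weights."""
    scores = torch.cat([
        (p.detach().abs() * masks[name]).flatten()
        for name, p in model.named_parameters() if name in masks
    ])
    surviving = scores[scores > 0]
    k = int(prune_fraction * surviving.numel())
    threshold = torch.kthvalue(surviving, k).values if k > 0 else torch.tensor(0.0)
    for name, p in model.named_parameters():
        if name in masks:
            masks[name] = ((p.detach().abs() > threshold) & masks[name].bool()).float()
    return masks


def train_on_distilled(model, masks, distilled_x, distilled_y, steps=200, lr=0.01):
    """Short retraining pass on the distilled set, keeping pruned weights at zero."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(distilled_x), distilled_y)
        loss.backward()
        opt.step()
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])  # re-zero pruned weights after each step
    return model


def distilled_pruning(model, distilled_x, distilled_y, rounds=10, prune_fraction=0.2):
    init_state = copy.deepcopy(model.state_dict())  # weights at initialization
    masks = {name: torch.ones_like(p)
             for name, p in model.named_parameters() if p.dim() > 1}
    for _ in range(rounds):
        model = train_on_distilled(model, masks, distilled_x, distilled_y)
        masks = magnitude_mask(model, masks, prune_fraction)
        model.load_state_dict(init_state)  # rewind surviving weights to init
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])
    return model, masks  # sparse, trainable subnetwork candidate ("lottery ticket")
```

Because each round trains on only a few synthetic images per class instead of the full CIFAR-10 training set, the per-round cost drops sharply, which is the intuition behind the reported speedup over standard Iterative Magnitude Pruning.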

Related articles:
arXiv:2006.07556 [cs.LG] (Published 2020-06-13)
Neural Architecture Search using Bayesian Optimisation with Weisfeiler-Lehman Kernel
arXiv:2007.04785 [cs.LG] (Published 2020-07-09)
Neural Architecture Search with GBDT
arXiv:1802.07191 [cs.LG] (Published 2018-02-11)
Neural Architecture Search with Bayesian Optimisation and Optimal Transport