arXiv Analytics

arXiv:2309.13077 [cs.LG]

A Differentiable Framework for End-to-End Learning of Hybrid Structured Compression

Moonjung Eo, Suhyun Kang, Wonjong Rhee

Published 2023-09-21 (Version 1)

Filter pruning and low-rank decomposition are two of the foundational techniques for structured compression. Although recent efforts have explored hybrid approaches that aim to integrate the advantages of both techniques, their performance gains have been modest at best. In this study, we develop a Differentiable Framework (DF) that expresses filter selection, rank selection, and the budget constraint in a single analytical formulation. Within the framework, we introduce DML-S for filter selection, which integrates scheduling into existing mask learning techniques. Additionally, we present DTL-S for rank selection, which utilizes a singular value thresholding operator. The framework with DML-S and DTL-S offers a hybrid structured compression methodology that facilitates end-to-end learning through gradient-based optimization. Experimental results demonstrate the efficacy of DF, which surpasses state-of-the-art structured compression methods. Our work establishes a robust and versatile avenue for advancing structured compression techniques.
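
The abstract names two ingredients: mask learning for filter selection (DML-S) and a singular value thresholding operator for rank selection (DTL-S). The sketch below is a generic illustration of those two operations only, not the paper's method; the class and function names (FilterMask, svt, tau) and the simple sigmoid gating are assumptions for illustration.

```python
# Minimal sketch of generic mask learning and singular value thresholding (SVT),
# assuming PyTorch; not the paper's DML-S/DTL-S implementation.
import torch
import torch.nn as nn


class FilterMask(nn.Module):
    """Per-filter gates learned by gradient descent (generic mask learning)."""

    def __init__(self, num_filters: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_filters))

    def forward(self, conv_weight: torch.Tensor) -> torch.Tensor:
        # Soft gates in (0, 1); a schedule could anneal them toward {0, 1}
        # so that gated filters can eventually be pruned.
        gates = torch.sigmoid(self.logits)
        return conv_weight * gates.view(-1, 1, 1, 1)


def svt(weight: torch.Tensor, tau: float) -> torch.Tensor:
    """Soft-threshold singular values; zeroed values lower the effective rank."""
    mat = weight.reshape(weight.shape[0], -1)          # flatten conv filters to a 2-D matrix
    U, S, Vh = torch.linalg.svd(mat, full_matrices=False)
    S_shrunk = torch.clamp(S - tau, min=0.0)           # shrink small singular values to zero
    return (U @ torch.diag(S_shrunk) @ Vh).reshape(weight.shape)
```

Because both the gates and the thresholded reconstruction are differentiable (almost everywhere), they can in principle be trained jointly with the network weights, which is the kind of end-to-end optimization the abstract describes.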

Comments: 11 pages, 5 figures, 6 tables
Categories: cs.LG, cs.AI, eess.IV
Related articles:
arXiv:1612.08810 [cs.LG] (Published 2016-12-28)
The Predictron: End-To-End Learning and Planning
David Silver et al.
arXiv:2501.14991 [cs.LG] (Published 2025-01-24)
Advances in Set Function Learning: A Survey of Techniques and Applications
arXiv:1805.06523 [cs.LG] (Published 2018-05-16)
End-to-end Learning of a Convolutional Neural Network via Deep Tensor Decomposition