arXiv:1903.02893 [cs.LG]

Only sparsity based loss function for learning representations

Vivek Bakaraju, Kishore Reddy Konda

Published 2019-03-07 (Version 1)

We study the emergence of sparse representations in neural networks. We show that in unsupervised models with regularization, sparsity emerges when the input data samples are distributed along a highly non-linear or discontinuous manifold. We derive a similar argument for discriminatively trained networks and present experiments supporting this hypothesis. Based on our study of sparsity, we introduce a new loss function that can be used as a regularization term for models such as autoencoders and MLPs. The same loss function can also serve as the cost function of an unsupervised single-layer neural network model for learning efficient representations.
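The abstract does not specify the exact form of the proposed loss function, so the sketch below uses a generic L1 activation penalty as a stand-in to illustrate the overall setup it describes: a sparsity-inducing term added as regularization to an autoencoder's reconstruction objective. All names, the architecture, and the penalty weight `lam` are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: an autoencoder whose training loss combines a
# reconstruction term with a sparsity penalty on hidden activations.
# The L1 penalty is an assumed stand-in for the paper's loss function.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, n_in=784, n_hidden=256):
        super().__init__()
        self.encoder = nn.Linear(n_in, n_hidden)
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.relu(self.encoder(x))  # hidden representation
        return self.decoder(h), h

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-3  # regularization weight (assumed)

x = torch.rand(64, 784)  # placeholder input batch
recon, h = model(x)
opt.zero_grad()
# Total loss = reconstruction error + sparsity-inducing penalty.
loss = nn.functional.mse_loss(recon, x) + lam * h.abs().mean()
loss.backward()
opt.step()
```

Dropping the reconstruction term and training the encoder on the penalty alone would correspond to the single-layer unsupervised use the abstract mentions, though the paper's actual cost function may differ.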

Related articles:
arXiv:1901.09178 [cs.LG] (Published 2019-01-26)
A general model for plane-based clustering with loss function
arXiv:1806.10069 [cs.LG] (Published 2018-06-26)
Deep $k$-Means: Jointly Clustering with $k$-Means and Learning Representations
arXiv:2004.00909 [cs.LG] (Published 2020-04-02)
Learning Representations For Images With Hierarchical Labels