arXiv:2302.01976 [cs.LG]
SPARLING: Learning Latent Representations with Extremely Sparse Activations
Kavi Gupta, Osbert Bastani, Armando Solar-Lezama
Published 2023-02-03 (Version 1)
Real-world processes often contain intermediate state that can be modeled as an extremely sparse tensor. We introduce Sparling, a new kind of informational bottleneck that explicitly models this state by enforcing extreme activation sparsity. We further demonstrate that this technique can learn the true intermediate representation with no additional supervision, i.e., from only end-to-end labeled examples, thereby improving the interpretability of the resulting models. On our DigitCircle domain, we achieve an intermediate-state prediction accuracy of 98.84%, even though we train only end-to-end.
Comments: 8 pages, 10 figures
Categories: cs.LG
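
To make the abstract's core idea concrete, the following is a minimal, hypothetical PyTorch sketch of an extreme-sparsity bottleneck layer. It assumes a top-k masking scheme with a small target density; the paper's actual sparsity mechanism and hyperparameters may differ, and the names (ExtremeSparsityBottleneck, density) are illustrative only.

    import torch
    import torch.nn as nn

    class ExtremeSparsityBottleneck(nn.Module):
        # Hypothetical sketch: keep the top-k activations per example and zero
        # the rest, forcing a near-all-zero intermediate tensor. Sparling's
        # actual layer may use a different mechanism (e.g. an adaptive
        # threshold); this only illustrates the idea.
        def __init__(self, density: float = 0.001):
            super().__init__()
            self.density = density  # target fraction of nonzero activations

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            flat = x.flatten(start_dim=1)                  # (batch, num_features)
            k = max(1, int(self.density * flat.shape[1]))  # number of units to keep
            kth = flat.topk(k, dim=1).values[:, -1:]       # k-th largest activation
            mask = (flat >= kth).to(flat.dtype)            # 1 for kept units, else 0
            return (flat * mask).view_as(x)                # gradients flow only through kept units

    if __name__ == "__main__":
        layer = ExtremeSparsityBottleneck(density=0.01)
        x = torch.relu(torch.randn(2, 16, 8, 8))           # nonnegative activations
        y = layer(x)
        print((y != 0).float().mean().item())              # roughly 0.01

Placed between an encoder and a decoder trained only on end-to-end labels, such a bottleneck pushes the intermediate tensor toward the extremely sparse regime the abstract describes.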