arXiv:1108.1169 [cs.CV]
Learning Representations by Maximizing Compression
Published 2011-08-04Version 1
We give an algorithm that learns a representation of data through compression. The algorithm 1) predicts bits sequentially from those previously seen and 2) has a structure and a number of computations similar to an autoencoder. The likelihood under the model can be calculated exactly, and arithmetic coding can be used directly for compression. When trained on digits, the algorithm learns filters similar to those of restricted Boltzmann machines and denoising autoencoders. Independent samples can be drawn from the model in a single sweep through the pixels. The algorithm achieves good compression performance compared to other methods that work under a random ordering of pixels.
Comments: 8 pages, 3 figures
Categories: cs.CV
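The abstract's key properties — sequential bit prediction, exact likelihood, and single-sweep sampling — can be illustrated with a minimal autoregressive model over binary pixels. This is a sketch, not the paper's architecture: it uses a simple fully-visible sigmoid model with strictly lower-triangular weights (so pixel i depends only on earlier pixels), with all names and sizes illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16  # number of binary pixels (illustrative size)
# Strictly lower-triangular weights: pixel i is predicted
# only from pixels seen before it.
W = rng.normal(scale=0.1, size=(D, D)) * np.tril(np.ones((D, D)), k=-1)
b = np.zeros(D)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood(x):
    """Exact log-likelihood: a product of per-pixel Bernoulli
    predictions, each conditioned on previously seen pixels.
    These per-pixel probabilities could be fed directly to an
    arithmetic coder for compression."""
    p = sigmoid(W @ x + b)  # p[i] uses only x[:i] because of the tril mask
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

def sample():
    """Draw an independent sample in a single sweep through the pixels."""
    x = np.zeros(D)
    for i in range(D):
        p_i = sigmoid(W[i] @ x + b[i])
        x[i] = float(rng.random() < p_i)
    return x

x = sample()
ll = log_likelihood(x)
```

Because the model factorizes over pixels in a fixed order, the likelihood is computed exactly in one pass, and sampling requires exactly one sweep, matching the two properties claimed in the abstract.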
Related articles:
arXiv:1603.06668 [cs.CV] (Published 2016-03-22)
Learning Representations for Automatic Colorization
arXiv:2011.01819 [cs.CV] (Published 2020-11-03)
Learning Representations from Audio-Visual Spatial Alignment
arXiv:2212.14110 [cs.CV] (Published 2022-12-28)
Learning Representations for Masked Facial Recovery