arXiv:1804.10253 [stat.ML]
From Principal Subspaces to Principal Components with Linear Autoencoders
Published 2018-04-26 (Version 1)
The autoencoder is an effective unsupervised learning model that is widely used in deep learning. It is well known that an autoencoder with a single fully-connected hidden layer, a linear activation function, and a squared-error cost function learns weights that span the same subspace as the principal component loading vectors; the learned weights themselves, however, are not identical to the loading vectors. In this paper, we show how to recover the loading vectors from the autoencoder weights.
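The phenomenon the abstract describes can be illustrated numerically. The sketch below (an assumption-laden toy, not the paper's exact recovery procedure) trains a linear autoencoder by plain gradient descent, confirms that the decoder rows span the principal subspace but are an arbitrary invertible mix of the loadings, and then recovers the loading vectors by orthonormalizing that subspace and diagonalizing the covariance of the data projected into it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic centered data with a clear spectrum: two dominant directions.
n, d, k = 500, 6, 2
A = rng.normal(size=(n, d)) * np.array([5.0, 3.0, 0.3, 0.2, 0.1, 0.1])
X = A - A.mean(axis=0)

# Ground-truth loading vectors: top-k right singular vectors of the data.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
true_load = Vt[:k]

# Linear autoencoder (encoder W1: d -> k, decoder W2: k -> d),
# squared-error cost, trained by plain gradient descent.
W1 = 0.1 * rng.normal(size=(d, k))
W2 = 0.1 * rng.normal(size=(k, d))
lr = 1e-4
for _ in range(20000):
    H = X @ W1            # hidden codes
    R = H @ W2 - X        # reconstruction residual
    W2g = H.T @ R / n     # gradient of 0.5*||R||^2/n w.r.t. W2
    W1g = X.T @ (R @ W2.T) / n
    W1 -= lr * W1g
    W2 -= lr * W2g

# The rows of W2 span the principal subspace but are mixed by an
# arbitrary invertible matrix. One way to recover the loadings:
# orthonormalize the subspace, then diagonalize the covariance of
# the projected data within it.
Q, _ = np.linalg.qr(W2.T)            # d x k orthonormal basis of the subspace
Z = X @ Q                            # data coordinates in that basis
evals, evecs = np.linalg.eigh(Z.T @ Z / n)
order = np.argsort(evals)[::-1]      # eigh returns ascending order
recovered = (Q @ evecs[:, order]).T  # rows: recovered loading vectors

# Up to sign, each recovered direction matches a true loading vector.
align = np.abs(np.sum(recovered * true_load, axis=1))
print(align)  # each entry close to 1.0
```

The data here has a large eigengap between the top two components and the rest, so gradient descent converges to the principal subspace quickly; with a smaller gap, more iterations or a tuned step size would be needed.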
Related articles:
arXiv:2211.03054 [stat.ML] (Published 2022-11-06)
The Importance of Suppressing Complete Reconstruction in Autoencoders for Unsupervised Outlier Detection
arXiv:1605.05918 [stat.ML] (Published 2016-05-19)
Bayesian Variable Selection for Globally Sparse Probabilistic PCA
arXiv:1111.1788 [stat.ML] (Published 2011-11-08)
Robust PCA as Bilinear Decomposition with Outlier-Sparsity Regularization