arXiv:1508.04422 [stat.ML]
Scalable Out-of-Sample Extension of Graph Embeddings Using Deep Neural Networks
Aren Jansen, Gregory Sell, Vince Lyzinski
Published 2015-08-18 (Version 1)
Several popular graph embedding techniques for representation learning and dimensionality reduction rely on performing computationally expensive eigendecompositions to derive a nonlinear transformation of the input data space. The resulting eigenvectors encode the embedding coordinates for the training samples only, preventing the transformation of novel data samples without recomputation. In this paper, we present a method for out-of-sample extension of graph embeddings that uses deep neural networks (DNNs) to parametrically approximate these nonlinear maps. Compared with traditional nonparametric out-of-sample extension methods, we demonstrate that the DNNs can generalize with equal or better fidelity and require orders of magnitude less computation at test time. Moreover, we find that unsupervised pretraining of the DNNs improves optimization for larger network sizes, thus removing sensitivity to model selection.
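The core idea can be illustrated with a toy sketch (not the paper's architecture or datasets; the graph construction, network size, and hyperparameters below are all assumptions for illustration): compute a Laplacian-eigenmap embedding for training points via an eigendecomposition, then train a small neural network to regress from input features to the embedding coordinates. Once trained, a novel sample is embedded with a single forward pass, with no eigendecomposition at test time.

```python
# Toy sketch of a parametric out-of-sample extension: a one-hidden-layer
# MLP (numpy only) regresses input features onto Laplacian-eigenmap
# coordinates. All sizes and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points on a noisy circle in R^2.
n = 200
theta = rng.uniform(0, 2 * np.pi, n)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((n, 2))

# Gaussian-affinity graph and symmetric normalized Laplacian.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / (2 * 0.2 ** 2))
np.fill_diagonal(W, 0.0)
d_inv_sqrt = 1.0 / np.sqrt(W.sum(1))
L = np.eye(n) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

# Training embedding: eigenvectors of the two smallest nontrivial
# eigenvalues of L (eigh returns ascending order; column 0 is trivial).
# Scaled by sqrt(n) so regression targets are O(1).
vals, vecs = np.linalg.eigh(L)
Y = vecs[:, 1:3] * np.sqrt(n)

# One-hidden-layer MLP, full-batch gradient descent on squared error.
h, lr = 32, 0.02
W1 = 0.5 * rng.standard_normal((2, h)); b1 = np.zeros(h)
W2 = 0.5 * rng.standard_normal((h, 2)); b2 = np.zeros(2)
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    err = H @ W2 + b2 - Y             # residual on embedding coordinates
    gW2 = H.T @ err / n; gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)  # backprop through tanh
    gW1 = X.T @ dH / n; gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = ((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean()

# A novel point is embedded with one forward pass -- no recomputation
# of the eigendecomposition is needed.
x_new = np.array([[np.cos(0.3), np.sin(0.3)]])
y_new = np.tanh(x_new @ W1 + b1) @ W2 + b2
```

The contrast with nonparametric extensions (e.g., Nystrom-style interpolation) is that those methods keep the training set around and compute affinities to it at test time, whereas here test-time cost is just the fixed cost of a network forward pass.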