arXiv:1508.04422 [stat.ML]

Scalable Out-of-Sample Extension of Graph Embeddings Using Deep Neural Networks

Aren Jansen, Gregory Sell, Vince Lyzinski

Published 2015-08-18 (Version 1)

Several popular graph embedding techniques for representation learning and dimensionality reduction rely on computationally expensive eigendecompositions to derive a nonlinear transformation of the input data space. The resulting eigenvectors encode the embedding coordinates for the training samples only, so novel data samples cannot be transformed without recomputation. In this paper, we present a method for out-of-sample extension of graph embeddings that uses deep neural networks (DNNs) to parametrically approximate these nonlinear maps. Compared with traditional nonparametric out-of-sample extension methods, we demonstrate that the DNNs can generalize with equal or better fidelity while requiring orders of magnitude less computation at test time. Moreover, we find that unsupervised pretraining of the DNNs improves optimization for larger network sizes, thus removing sensitivity to model selection.
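The workflow the abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: scikit-learn's SpectralEmbedding stands in for the eigendecomposition-based graph embedding (e.g., Laplacian eigenmaps), MLPRegressor stands in for the DNN, the unsupervised pretraining step is omitted, and all data and hyperparameters are made up.

```python
# Minimal sketch: (1) compute a graph embedding on training data only,
# (2) fit a DNN to approximate the feature-to-embedding map, (3) embed
# novel samples with a cheap forward pass instead of recomputation.
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20))   # stand-in training data
X_test = rng.normal(size=(200, 20))     # novel samples to be extended

# 1. Eigendecomposition-based graph embedding. SpectralEmbedding has no
#    transform() for unseen points, which is exactly the out-of-sample
#    problem the paper addresses.
embedder = SpectralEmbedding(n_components=4, n_neighbors=10, random_state=0)
Y_train = embedder.fit_transform(X_train)

# 2. Train a DNN regressor to parametrically approximate the nonlinear
#    map from input features to embedding coordinates.
dnn = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=1000,
                   random_state=0)
dnn.fit(X_train, Y_train)

# 3. Embedding a novel sample is now a single forward pass; no graph
#    construction or eigendecomposition is needed at test time.
Y_test = dnn.predict(X_test)
print(Y_test.shape)                     # (200, 4)
```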
