arXiv Analytics

arXiv:1211.0358 [stat.ML]

Deep Gaussian Processes

Andreas C. Damianou, Neil D. Lawrence

Published 2012-11-02, updated 2013-03-23 (Version 2)

In this paper we introduce deep Gaussian process (GP) models. A deep GP is a deep belief network based on Gaussian process mappings: the data are modelled as the output of a multivariate GP, and the inputs to that GP are in turn governed by another GP. A single-layer model is equivalent to a standard GP or the GP latent variable model (GP-LVM). We perform inference in the model by approximate variational marginalization, which yields a strict lower bound on the marginal likelihood that we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows deep models to be applied even when data are scarce. Model selection by our variational bound shows that a five-layer hierarchy is justified even when modelling a digit data set containing only 150 examples.
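To make the hierarchical construction concrete, here is a minimal NumPy sketch of sampling from a two-layer deep GP prior: a hidden layer drawn from a GP over known inputs, then the observed layer drawn from a second GP whose inputs are that hidden layer. This is an illustrative sketch only (it assumes RBF kernels and unit hyperparameters); it is not the paper's inference procedure, which marginalizes the hidden layers variationally rather than sampling them.

```python
import numpy as np

def rbf_kernel(X1, X2, variance=1.0, lengthscale=1.0):
    """Squared-exponential (RBF) covariance between two sets of inputs."""
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def sample_gp_layer(X, rng, jitter=1e-6):
    """Draw one sample f ~ GP(0, k) evaluated at the inputs X."""
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))  # jitter for stability
    L = np.linalg.cholesky(K)
    return L @ rng.standard_normal((len(X), 1))

rng = np.random.default_rng(0)
Z = np.linspace(-3, 3, 100)[:, None]  # top-layer inputs
H = sample_gp_layer(Z, rng)           # hidden layer: H = f1(Z), f1 ~ GP
Y = sample_gp_layer(H, rng)           # observed layer: Y = f2(H), f2 ~ GP
```

Composing the two GP mappings in this way produces richer, non-stationary functions than either layer alone; with one layer removed, the sketch reduces to an ordinary GP sample, matching the paper's remark that a single-layer model recovers a standard GP or the GP-LVM.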

Comments: 9 pages, 8 figures. Appearing in Proceedings of the 16th International Conference on Artificial Intelligence and Statistics (AISTATS) 2013
Categories: stat.ML, cs.LG, math.PR
Subjects: 60G15, 58E30, G.3, G.1.2, I.2.6
Related articles:
arXiv:2011.01226 [stat.ML] (Published 2020-11-02)
Sample-efficient reinforcement learning using deep Gaussian processes
arXiv:1707.03909 [stat.ML] (Published 2017-07-12)
Model Selection for Anomaly Detection
arXiv:1804.07344 [stat.ML] (Published 2018-04-19)
Effects of sampling skewness of the importance-weighted risk estimator on model selection