arXiv:1602.04133 [stat.ML]

Deep Gaussian Processes for Regression using Approximate Expectation Propagation

Thang D. Bui, Daniel Hernández-Lobato, Yingzhen Li, José Miguel Hernández-Lobato, Richard E. Turner

Published 2016-02-12 (Version 1)

Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers. DGPs are nonparametric probabilistic models and as such are arguably more flexible, have a greater capacity to generalise, and provide better-calibrated uncertainty estimates than alternative deep models. This paper develops a new approximate Bayesian learning scheme that enables DGPs to be applied to a range of medium- to large-scale regression problems for the first time. The new method uses an approximate Expectation Propagation procedure and a novel and efficient extension of the probabilistic backpropagation algorithm for learning. We evaluate the new method for non-linear regression on eleven real-world datasets, showing that it always outperforms GP regression and is almost always better than state-of-the-art deterministic and sampling-based approximate inference methods for Bayesian neural networks. As a by-product, this work provides a comprehensive analysis of six approximate Bayesian methods for training neural networks.
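To make the hierarchical construction concrete, here is a minimal sketch (Python/NumPy, not from the paper) that draws a single function sample from a two-layer DGP prior: the output of the first GP layer becomes the input of the second. The RBF kernel choice, lengthscales, and noise level are illustrative assumptions, and the sketch shows only the generative composition; it does not implement the paper's approximate Expectation Propagation inference.

    # Minimal sketch: one sample from a two-layer DGP prior,
    # illustrating y = f2(f1(x)) + noise. All settings are illustrative.
    import numpy as np

    def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
        # Squared-exponential kernel: variance * exp(-|a-b|^2 / (2 l^2))
        sqdist = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return variance * np.exp(-sqdist / (2.0 * lengthscale ** 2))

    rng = np.random.default_rng(0)
    x = np.linspace(-3, 3, 100)[:, None]          # inputs, shape (N, 1)

    # Layer 1: sample h1 ~ GP(0, K(x, x)); jitter keeps the Cholesky stable.
    K1 = rbf_kernel(x, x) + 1e-8 * np.eye(len(x))
    h1 = np.linalg.cholesky(K1) @ rng.standard_normal(len(x))

    # Layer 2: the first layer's output is the second layer's input.
    K2 = rbf_kernel(h1[:, None], h1[:, None]) + 1e-8 * np.eye(len(x))
    f2 = np.linalg.cholesky(K2) @ rng.standard_normal(len(x))

    y = f2 + 0.1 * rng.standard_normal(len(x))    # Gaussian observation noise

Warping the inputs through the first layer is what gives the composed prior its extra flexibility over a single GP, at the cost of intractable exact inference, which is what motivates the approximate scheme developed in the paper.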

Related articles:
arXiv:2505.11355 [stat.ML] (Published 2025-05-16)
STRIDE: Sparse Techniques for Regression in Deep Gaussian Processes
arXiv:1211.0358 [stat.ML] (Published 2012-11-02, updated 2013-03-23)
Deep Gaussian Processes
arXiv:1805.01867 [stat.ML] (Published 2018-05-04)
Bayesian active learning for choice models with deep Gaussian processes