arXiv Analytics

arXiv:2007.04803 [stat.ML]

Online Approximate Bayesian learning

Mathieu Gerber, Randal Douc

Published 2020-07-09 (Version 1)

In this work we introduce a new approach to online approximate Bayesian learning. The main idea of the proposed method is to approximate the sequence $(\pi_t)_{t\geq 1}$ of posterior distributions by a sequence $(\tilde{\pi}_t)_{t\geq 1}$ which (i) can be estimated in an online fashion using sequential Monte Carlo methods and (ii) is shown to converge to the same distribution as the sequence $(\pi_t)_{t\geq 1}$, under weak assumptions on the statistical model at hand. In its simplest version, $(\tilde{\pi}_t)_{t\geq 1}$ is the sequence of filtering distributions associated with a particular state-space model, which can therefore be approximated using a standard particle filter algorithm. We illustrate the benefits of this approach for approximate Bayesian parameter inference on several challenging examples, and on one real data example we show that its online predictive performance can significantly outperform that of stochastic gradient descent and streaming variational Bayes.
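To make the filtering idea concrete, the following is a minimal, self-contained sketch of a standard bootstrap particle filter in which a static parameter $\theta$ is tracked by giving it small artificial Gaussian dynamics, so that the resulting filtering distributions play the role of an approximating sequence $(\tilde{\pi}_t)_{t\geq 1}$. The observation model (Gaussian data with unknown mean), the jitter scale, and all function names here are illustrative assumptions for exposition, not the specific state-space construction of the paper.

import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(y, theta):
    # Assumed observation density for illustration: y_t ~ N(theta, 1).
    return -0.5 * (y - theta) ** 2

def particle_filter(ys, n_particles=1000, jitter=0.05):
    # Initial particles drawn from a diffuse prior (assumption).
    theta = rng.normal(0.0, 10.0, size=n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    for y in ys:
        # Propagate: small artificial dynamics keep the particle cloud alive.
        theta = theta + jitter * rng.standard_normal(n_particles)
        # Reweight by the likelihood of the new observation (online update).
        logw = np.log(weights) + log_likelihood(y, theta)
        logw -= logw.max()
        weights = np.exp(logw)
        weights /= weights.sum()
        # Resample when the effective sample size drops too low.
        if 1.0 / np.sum(weights ** 2) < n_particles / 2:
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            theta, weights = theta[idx], np.full(n_particles, 1.0 / n_particles)
    return theta, weights

# Usage: online approximation of the posterior over the mean of a Gaussian.
ys = rng.normal(1.5, 1.0, size=500)
theta, weights = particle_filter(ys)
print("posterior mean estimate:", np.average(theta, weights=weights))

Each pass through the loop processes one observation, so the cost per time step is constant in $t$; this is the sense in which such filtering approximations can be estimated in an online fashion.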

Related articles:
arXiv:1706.00098 [stat.ML] (Published 2017-05-31)
Bayesian $l_0$ Regularized Least Squares
arXiv:1907.10477 [stat.ML] (Published 2019-07-24)
On the relationship between variational inference and adaptive importance sampling
arXiv:1602.01120 [stat.ML] (Published 2016-02-02)
On the Nyström and Column-Sampling Methods for the Approximate Principal Components Analysis of Large Data Sets