arXiv:1511.03243 [stat.ML]

Black-box $\alpha$-divergence Minimization

José Miguel Hernández-Lobato, Yingzhen Li, Daniel Hernández-Lobato, Thang Bui, Richard E. Turner

Published 2015-11-10 (Version 1)

We present black-box alpha (BB-$\alpha$), an approximate inference method based on the minimization of $\alpha$-divergences between probability distributions. BB-$\alpha$ scales to large datasets because it can be implemented using stochastic gradient descent. BB-$\alpha$ can be applied to complex probabilistic models with little effort, since it only requires as input the likelihood function and its gradients, which can be obtained easily via automatic differentiation. By tuning the parameter $\alpha$, we can interpolate between variational Bayes and an expectation-propagation-like algorithm. Experiments on probit and neural network regression problems illustrate the accuracy of the posterior approximations obtained with BB-$\alpha$.
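
As a concrete illustration (not code from the paper), the sketch below implements a commonly used reparameterised form of a BB-$\alpha$-style energy, $L_\alpha(q) = \mathrm{KL}[q \,\|\, p] - \frac{1}{\alpha} \sum_n \log \mathbb{E}_q[p(y_n|\theta)^\alpha]$, with a Monte Carlo estimate of the tilted expectations. The choice of JAX, the linear-Gaussian toy model, the standard-normal prior, and all hyperparameters are illustrative assumptions.

    # Minimal BB-alpha-style sketch in JAX. The toy model, prior, and
    # hyperparameters are assumptions for illustration, not from the paper.
    import jax
    import jax.numpy as jnp
    from jax.scipy.special import logsumexp

    def log_lik(theta, x, y, noise_std=0.1):
        # Gaussian log-likelihood of an assumed linear model y = x @ theta + eps.
        mu = x @ theta
        return -0.5 * ((y - mu) / noise_std) ** 2 - jnp.log(noise_std * jnp.sqrt(2 * jnp.pi))

    def kl_gaussian(mean, log_std):
        # KL[q || p] between factorised q = N(mean, std^2) and prior p = N(0, I).
        var = jnp.exp(2 * log_std)
        return 0.5 * jnp.sum(var + mean ** 2 - 1.0 - 2 * log_std)

    def bb_alpha_energy(params, x, y, key, alpha=0.5, K=10):
        mean, log_std = params
        eps = jax.random.normal(key, (K, mean.shape[0]))
        thetas = mean + jnp.exp(log_std) * eps        # reparameterisation trick
        ll = jax.vmap(lambda th: log_lik(th, x, y))(thetas)  # shape (K, N)
        # -(1/alpha) log (1/K) sum_k p(y_n|theta_k)^alpha, via log-sum-exp, per datum n
        tilted = -(logsumexp(alpha * ll, axis=0) - jnp.log(K)) / alpha
        return kl_gaussian(mean, log_std) + jnp.sum(tilted)

    # One stochastic-gradient step on toy data; gradients come from autodiff,
    # so only the likelihood needed to be specified.
    data_key, fit_key = jax.random.split(jax.random.PRNGKey(0))
    x = jax.random.normal(data_key, (32, 3))
    y = x @ jnp.array([1.0, -2.0, 0.5])
    params = (jnp.zeros(3), jnp.full(3, -1.0))
    loss, grads = jax.value_and_grad(bb_alpha_energy)(params, x, y, fit_key, alpha=0.5)
    params = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)

As $\alpha \to 0$, the tilted term in this sketch tends to $-\mathbb{E}_q[\log p(y_n|\theta)]$ and the objective recovers the variational Bayes (negative ELBO) objective, matching the interpolation described in the abstract.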

Comments: To be presented at NIPS workshops on Advances in Approximate Bayesian Inference and Black Box Learning and Inference
Categories: stat.ML