arXiv:1511.03243 [stat.ML]
Black-box $\alpha$-divergence Minimization
José Miguel Hernández-Lobato, Yingzhen Li, Daniel Hernández-Lobato, Thang Bui, Richard E. Turner
Published 2015-11-10 (Version 1)
We present black-box alpha (BB-$\alpha$), an approximate inference method based on the minimization of $\alpha$-divergences between probability distributions. BB-$\alpha$ scales to large datasets because it can be implemented using stochastic gradient descent. BB-$\alpha$ can be applied to complex probabilistic models with little effort, since it only requires as input the likelihood function and its gradients, which can be obtained easily using automatic differentiation. By tuning the parameter $\alpha$, we are able to interpolate between variational Bayes and an algorithm similar to expectation propagation. Experiments on probit regression and neural network regression problems illustrate the accuracy of the posterior approximations obtained with BB-$\alpha$.
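To make the objective concrete, below is a minimal sketch (not the authors' released code) of one commonly used Monte Carlo form of a BB-$\alpha$-style energy: a KL term against the prior plus per-datapoint terms of the form $-\frac{1}{\alpha}\log \mathbb{E}_q\!\left[p(y_n \mid x_n, w)^\alpha\right]$, estimated with reparameterized samples and minimized by SGD, here for probit regression with a factorized Gaussian approximation. The names (`bb_alpha_energy`, `sgd_step`, the sample count `K`) are illustrative assumptions, and JAX stands in for any automatic-differentiation framework.

```python
import jax
import jax.numpy as jnp
from jax.scipy.stats import norm
from jax.scipy.special import logsumexp

def kl_gaussian(mu, log_sigma):
    # KL( N(mu, sigma^2) || N(0, I) ) for a factorized Gaussian, summed over dims.
    return jnp.sum(0.5 * (jnp.exp(2.0 * log_sigma) + mu ** 2 - 1.0) - log_sigma)

def bb_alpha_energy(params, X, y, alpha, key, K=16):
    # Monte Carlo estimate of the energy using K reparameterized samples from q.
    # alpha must be nonzero; the alpha -> 0 limit is the standard VB objective.
    mu, log_sigma = params
    eps = jax.random.normal(key, (K, mu.shape[0]))
    w = mu + jnp.exp(log_sigma) * eps                  # (K, D) samples from q
    logits = X @ w.T                                   # (N, K)
    log_lik = norm.logcdf(y[:, None] * logits)         # probit likelihood, y in {-1, +1}
    # -(1/alpha) * log( (1/K) * sum_k p(y_n | x_n, w_k)^alpha ), per datapoint
    site_terms = -(logsumexp(alpha * log_lik, axis=1) - jnp.log(K)) / alpha
    return jnp.sum(site_terms) + kl_gaussian(mu, log_sigma)

@jax.jit
def sgd_step(params, X, y, alpha, key, lr=1e-2):
    # Gradients of the energy come from automatic differentiation, as in the abstract.
    grads = jax.grad(bb_alpha_energy)(params, X, y, alpha, key)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
```

A training loop would draw a fresh PRNG key per step (`jax.random.split`) and minibatch `X, y` to get stochastic gradients. The interpolation claim is visible in the per-datapoint term: as $\alpha \to 0$ it tends to $-\mathbb{E}_q[\log p(y_n \mid x_n, w)]$, recovering a variational Bayes objective, while $\alpha = 1$ yields an expectation-propagation-like objective.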