arXiv:1505.02065 [stat.ML]

Improving Gibbs Sampling Predictions on Unseen Data for Latent Dirichlet Allocation

Yannis Papanikolaou, Timothy N. Rubin, Grigorios Tsoumakas

Published 2015-05-08 (Version 1)

Latent Dirichlet Allocation (LDA) is a model for discovering the underlying structure of a given data set. LDA and its extensions have been used in unsupervised and supervised learning tasks across a variety of data types, including textual, image, and biological data. Several methods have been proposed for approximate inference of LDA parameters, including Variational Bayes (VB), Collapsed Gibbs Sampling (CGS), and Collapsed Variational Bayes (CVB). This work explores three novel methods for generating LDA predictions on unseen data, given a model trained by CGS. We present extensive experiments on real-world data sets for both standard unsupervised LDA and Prior LDA, a supervised variant of LDA for multi-label data. In both settings, we empirically compare our prediction methods with the standard predictions generated by CGS and by CVB0 (a variant of CVB). The results show a consistent advantage of one of our methods over CGS under all experimental conditions, and over CVB0 under the majority of conditions.
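For orientation, the baseline that the proposed methods aim to improve is the standard CGS "fold-in" prediction: topic-word distributions are fixed from the trained counts, and the topic assignments for the tokens of an unseen document are resampled by Gibbs sampling. The sketch below is an illustrative reconstruction of that baseline only, not of the paper's three proposed methods; the names cgs_fold_in, n_wk, alpha, and beta are hypothetical, and symmetric Dirichlet hyperparameters are assumed.

    import numpy as np

    def cgs_fold_in(doc, n_wk, alpha=0.1, beta=0.01, n_iters=100, rng=None):
        """Standard CGS 'fold-in' prediction for one unseen document.

        doc  : list of word ids (ints in [0, V))
        n_wk : (V, K) topic-word count matrix from a trained CGS model
        Returns an estimate of the document-topic distribution theta.
        """
        rng = rng or np.random.default_rng(0)
        V, K = n_wk.shape
        # Point estimate of the topic-word distributions from training counts.
        phi = (n_wk + beta) / (n_wk.sum(axis=0, keepdims=True) + V * beta)

        # Random initial topic assignment per token; local doc-topic counts.
        z = rng.integers(K, size=len(doc))
        n_dk = np.bincount(z, minlength=K).astype(float)

        for _ in range(n_iters):
            for i, w in enumerate(doc):
                n_dk[z[i]] -= 1.0                    # remove token i from counts
                p = (n_dk + alpha) * phi[w]          # CGS conditional (up to a constant)
                z[i] = rng.choice(K, p=p / p.sum())  # resample its topic
                n_dk[z[i]] += 1.0

        # Smoothed estimate of theta from the final doc-topic counts.
        return (n_dk + alpha) / (len(doc) + K * alpha)

In this baseline, the trained model enters only through phi; the paper's contribution concerns alternative ways of producing such predictions from a CGS-trained model, which this sketch does not reproduce.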
