{ "id": "1510.08628", "version": "v1", "published": "2015-10-29T10:33:20.000Z", "updated": "2015-10-29T10:33:20.000Z", "title": "WarpLDA: a Simple and Efficient O(1) Algorithm for Latent Dirichlet Allocation", "authors": [ "Jianfei Chen", "Kaiwei Li", "Jun Zhu", "Wenguang Chen" ], "categories": [ "stat.ML", "cs.DC", "cs.IR", "cs.LG" ], "abstract": "Developing efficient and scalable algorithms for Latent Dirichlet Allocation (LDA) is of wide interest for many applications. Previous work has developed an $O(1)$ Metropolis-Hastings sampling method for each token. However, its performance is far from optimal due to random accesses to the parameter matrices and frequent cache misses. In this paper, we propose WarpLDA, a novel $O(1)$ sampling algorithm for LDA. WarpLDA is a Metropolis-Hastings based algorithm designed to optimize the cache hit rate. Advantages of WarpLDA include 1) Efficiency and scalability: WarpLDA has good locality and a carefully designed partition method, and can be scaled to hundreds of machines; 2) Simplicity: WarpLDA does not have any complicated modules such as alias tables, hybrid data structures, or parameter servers, making it easy to understand and implement; 3) Robustness: WarpLDA is consistently faster than other algorithms, under various settings from small-scale to massive-scale datasets and models. WarpLDA is 5-15x faster than state-of-the-art LDA samplers, reducing both time and monetary cost. With WarpLDA, users can learn up to one million topics from hundreds of millions of documents in a few hours, at a rate of 2G tokens per second, or learn topics from small-scale datasets in seconds.", "revisions": [ { "version": "v1", "updated": "2015-10-29T10:33:20.000Z" } ], "analyses": { "subjects": [ "H.3.4", "G.3", "G.4" ], "keywords": [ "latent dirichlet allocation", "state-of-the-art lda samplers", "frequent cache misses", "hybrid data structures", "cache hit rate" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }