arXiv:1301.7401 [cs.LG]

An Experimental Comparison of Several Clustering and Initialization Methods

Marina Meila, David Heckerman

Published 2013-01-30, updated 2015-05-16 (Version 2)

We examine methods for clustering in high dimensions. In the first part of the paper, we perform an experimental comparison between three batch clustering algorithms: the Expectation-Maximization (EM) algorithm, a winner-take-all version of the EM algorithm reminiscent of the K-means algorithm, and model-based hierarchical agglomerative clustering. We learn naive-Bayes models with a hidden root node, using high-dimensional discrete-variable data sets (both real and synthetic). We find that the EM algorithm significantly outperforms the other methods, and proceed to investigate the effect of various initialization schemes on the final solution produced by the EM algorithm. The initializations that we consider are (1) parameters sampled from an uninformative prior, (2) random perturbations of the marginal distribution of the data, and (3) the output of hierarchical agglomerative clustering. Although the methods are substantially different, they lead to learned models that are strikingly similar in quality.
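As a concrete illustration of the setup the abstract describes, here is a minimal sketch (not the authors' code) of EM for a naive-Bayes mixture with a hidden root node, specialized to binary features (a Bernoulli mixture) for brevity; the paper's data are more generally discrete-valued. The initialization follows scheme (2) above: the data marginals plus a small random perturbation. The function name, perturbation width, and probability clipping are illustrative assumptions.

```python
import numpy as np

def em_bernoulli_mixture(X, k, n_iter=50, seed=0):
    """EM for a mixture of k naive-Bayes components over binary features.

    X: (n, d) array of 0/1 data. Returns mixing weights pi (k,) and
    per-component feature probabilities theta (k, d).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape

    # Initialization scheme (2): random perturbation of the data marginals.
    # The +/-0.1 width and the [0.01, 0.99] clip are illustrative choices.
    marginals = X.mean(axis=0)
    theta = np.clip(marginals + rng.uniform(-0.1, 0.1, size=(k, d)),
                    0.01, 0.99)
    pi = np.full(k, 1.0 / k)

    for _ in range(n_iter):
        # E-step: responsibilities from per-component log-likelihoods.
        log_p = (X @ np.log(theta).T
                 + (1.0 - X) @ np.log(1.0 - theta).T
                 + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)  # stabilize exp
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate mixing weights and feature probabilities.
        nk = r.sum(axis=0)
        pi = nk / n
        theta = np.clip((r.T @ X) / nk[:, None], 0.01, 0.99)

    return pi, theta
```

The winner-take-all variant mentioned in the abstract would replace the soft responsibilities `r` with a hard assignment of each point to its most likely component, which is what makes it reminiscent of K-means.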

Comments: Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI 1998)
Categories: cs.LG, stat.ML
Related articles:
arXiv:1310.5034 [cs.LG] (Published 2013-10-18, updated 2014-07-02)
A Theoretical and Experimental Comparison of the EM and SEM Algorithm
arXiv:2003.08820 [cs.LG] (Published 2020-03-13)
Experimental Comparison of Semi-parametric, Parametric, and Machine Learning Models for Time-to-Event Analysis Through the Concordance Index
arXiv:2211.16110 [cs.LG] (Published 2022-11-29)
PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison