arXiv Analytics


arXiv:2502.06007 [stat.ML]

Transformers versus the EM Algorithm in Multi-class Clustering

Yihan He, Hong-Yu Chen, Yuan Cao, Jianqing Fan, Han Liu

Published 2025-02-09 (Version 1)

LLMs demonstrate significant inference capacities on complicated machine learning tasks, using the Transformer model as their backbone. Motivated by the limited understanding of such models on unsupervised learning problems, we study the learning guarantees of Transformers in performing multi-class clustering of Gaussian Mixture Models. We develop a theory drawing strong connections between Softmax Attention layers and the workflow of the EM algorithm for clustering mixtures of Gaussians. Our theory provides approximation bounds for the Expectation and Maximization steps by proving the universal approximation ability of Softmax functions for multivariate mappings. Beyond these approximation guarantees, we also show that, given a sufficient number of pre-training samples and an initialization, Transformers achieve the minimax optimal rate for the problem considered. Our extensive simulations empirically verify the theory, revealing strong learning capacities of Transformers even beyond the theory's assumptions and shedding light on the powerful inference capacities of LLMs.
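To make the abstract's Softmax-Attention/EM connection concrete, here is a minimal NumPy sketch (not the paper's construction; all function names and hyperparameters are illustrative) of EM for an isotropic Gaussian mixture, written so that the E-step responsibilities are explicitly a softmax over per-component scores:

```python
# Minimal sketch, assuming an isotropic GMM with a shared fixed variance,
# so the Gaussian normalizing constant cancels inside the softmax.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def em_gmm(X, K, n_iter=50, var=1.0, seed=0):
    """EM for a K-component isotropic GMM with fixed variance `var`."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, K, replace=False)]   # initialize means at random data points
    log_pi = np.full(K, -np.log(K))           # uniform mixing weights (log scale)
    for _ in range(n_iter):
        # E-step: responsibilities = softmax of (log prior + log likelihood) per point
        scores = log_pi - 0.5 * ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1) / var
        R = softmax(scores, axis=1)           # shape (n, K)
        # M-step: re-estimate means and mixing weights from the soft assignments
        Nk = R.sum(axis=0) + 1e-12
        mu = (R.T @ X) / Nk[:, None]
        log_pi = np.log(Nk / n)
    return mu, R

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Three well-separated Gaussian clusters in 2D
    centers = np.array([[0.0, 0.0], [5.0, 5.0], [-5.0, 5.0]])
    X = np.vstack([c + rng.normal(size=(100, 2)) for c in centers])
    mu, R = em_gmm(X, K=3)
    print("estimated means:\n", mu)
    print("hard cluster sizes:", np.bincount(R.argmax(axis=1)))
```

The E-step here has the same functional form as a Softmax Attention layer (a softmax over query-key scores weighting a set of values), which is the structural analogy the paper's approximation results build on.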

Related articles:
arXiv:1701.03268 [stat.ML] (Published 2017-01-12)
Relaxation of the EM Algorithm via Quantum Annealing for Gaussian Mixture Models
arXiv:1701.08946 [stat.ML] (Published 2017-01-31)
Variable selection for clustering with Gaussian mixture models: state of the art
arXiv:2302.14599 [stat.ML] (Published 2023-02-28)
Scalable Clustering: Large Scale Unsupervised Learning of Gaussian Mixture Models with Outliers