arXiv Analytics

arXiv:1301.2269 [cs.LG]

Learning the Dimensionality of Hidden Variables

Gal Elidan, Nir Friedman

Published 2013-01-10 (Version 1)

A serious problem in learning probabilistic models is the presence of hidden variables. These variables are not observed, yet they interact with several of the observed variables. Detecting hidden variables poses two problems: determining their relations to the other variables in the model, and determining the number of states of each hidden variable. In this paper, we address the latter problem in the context of Bayesian networks. We describe an approach that utilizes score-based agglomerative state clustering. As we show, this approach allows us to efficiently evaluate models with a range of cardinalities for the hidden variable. We show how to extend this procedure to deal with multiple interacting hidden variables. We demonstrate the effectiveness of this approach by evaluating it on synthetic and real-life data, and show that our approach learns models with hidden variables that generalize better and have better structure than those found by previous approaches.
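To give a concrete flavor of the idea, the sketch below illustrates score-based agglomerative state clustering in a deliberately simplified setting: each hidden-variable state is summarized by counts of a single observed child variable, states are greedily merged whenever the merge improves a BIC-style score, and the search stops when no merge helps. This is an illustrative assumption-laden toy, not the authors' actual algorithm (which operates inside full Bayesian network structure search); the scoring function, parameter count, and data representation here are all simplifications chosen for brevity.

```python
import math
from itertools import combinations

def loglik(counts):
    # Multinomial log-likelihood of one state's observed-child counts.
    n = sum(counts.values())
    return sum(c * math.log(c / n) for c in counts.values() if c > 0)

def bic_score(states, n_total, n_child_vals):
    # Score = log-likelihood minus a BIC penalty that grows with the
    # hidden variable's cardinality (len(states)).
    ll = sum(loglik(c) for c in states)
    params = len(states) * (n_child_vals - 1) + (len(states) - 1)
    return ll - 0.5 * math.log(n_total) * params

def merge(a, b):
    # Merging two states just pools their sufficient statistics (counts).
    out = dict(a)
    for v, c in b.items():
        out[v] = out.get(v, 0) + c
    return out

def agglomerate(states, n_child_vals):
    # Greedily merge the pair of states that most improves the score,
    # until no merge improves it; the surviving number of states is the
    # learned cardinality.
    n_total = sum(sum(c.values()) for c in states)
    best = bic_score(states, n_total, n_child_vals)
    while len(states) > 1:
        i, j, s = max(
            ((i, j, bic_score(
                [st for k, st in enumerate(states) if k not in (i, j)]
                + [merge(states[i], states[j])], n_total, n_child_vals))
             for i, j in combinations(range(len(states)), 2)),
            key=lambda t: t[2])
        if s <= best:
            break
        states = ([st for k, st in enumerate(states) if k not in (i, j)]
                  + [merge(states[i], states[j])])
        best = s
    return states, best

# Toy usage: three initial states, two of which have nearly identical
# child distributions, so the score favors merging them.
states = [{0: 50, 1: 5}, {0: 48, 1: 6}, {0: 5, 1: 50}]
final_states, score = agglomerate(states, n_child_vals=2)
print(len(final_states))  # learned cardinality
```

Because the score is evaluated after every candidate merge, the procedure traverses the whole range of cardinalities (from the initial number of states down to one) in a single agglomerative pass, which is the efficiency property the abstract highlights.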

Comments: Appears in Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (UAI2001)
Categories: cs.LG, cs.AI, stat.ML
Related articles:
arXiv:2007.04440 [cs.LG] (Published 2020-07-08)
On the relationship between class selectivity, dimensionality, and robustness
arXiv:2502.05360 [cs.LG] (Published 2025-02-07)
Curse of Dimensionality in Neural Network Optimization
arXiv:1701.00831 [cs.LG] (Published 2017-01-03)
Collapsing of dimensionality