arXiv:1302.3580 [cs.LG]

Asymptotic Model Selection for Directed Networks with Hidden Variables

Dan Geiger, David Heckerman, Christopher Meek

Published 2013-02-13, updated 2015-05-16 (version 2)

We extend the Bayesian Information Criterion (BIC), an asymptotic approximation to the marginal likelihood, to Bayesian networks with hidden variables. This approximation can be used to select models given large samples of data. Both the standard BIC and our extension penalize the complexity of a model according to the dimension of its parameters. We argue that the dimension of a Bayesian network with hidden variables is the rank of the Jacobian matrix of the transformation between the parameters of the network and the parameters of the distribution over the observable variables. We compute the dimensions of several networks, including the naive Bayes model with a hidden root node.
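As an illustration of the rank computation described in the abstract, the following Python sketch (not from the paper) estimates the effective dimension of a naive Bayes model with a binary hidden root and n binary observables: it builds the map from network parameters to the joint distribution over the observables, approximates its Jacobian numerically, and takes the numerical rank. The parameterization, the finite-difference step, and the rank tolerance are illustrative assumptions, not choices made by the authors.

```python
import numpy as np

def observable_dist(theta, n):
    # Map the parameters of a naive Bayes network with a binary hidden
    # root H and n binary observables X_1..X_n to the joint distribution
    # over the observables, with H marginalized out.
    # theta = [P(H=1), P(X_1=1|H=0), P(X_1=1|H=1), ..., P(X_n=1|H=0), P(X_n=1|H=1)]
    p_h1 = theta[0]
    cond = np.asarray(theta[1:]).reshape(n, 2)   # cond[i, h] = P(X_i=1 | H=h)
    probs = np.empty(2 ** n)
    for idx in range(2 ** n):
        bits = [(idx >> i) & 1 for i in range(n)]
        total = 0.0
        for h, p_h in enumerate((1.0 - p_h1, p_h1)):
            term = p_h
            for i, b in enumerate(bits):
                term *= cond[i, h] if b else 1.0 - cond[i, h]
            total += term
        probs[idx] = total
    return probs

def effective_dimension(n, seed=0, eps=1e-5, rel_tol=1e-6):
    # Numerical rank of the Jacobian of the map above at a random
    # interior parameter point, via central differences and the SVD.
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.2, 0.8, size=1 + 2 * n)
    jac = np.empty((2 ** n, theta.size))
    for j in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[j] += eps
        minus[j] -= eps
        jac[:, j] = (observable_dist(plus, n) - observable_dist(minus, n)) / (2 * eps)
    sv = np.linalg.svd(jac, compute_uv=False)
    return int(np.sum(sv > rel_tol * sv[0]))

if __name__ == "__main__":
    for n in (2, 3, 4, 5):
        print(f"n={n}: network parameters = {1 + 2 * n}, "
              f"Jacobian rank (effective dimension) = {effective_dimension(n)}")
```

The printed rank is the quantity the abstract identifies as the model's dimension; comparing it with the raw parameter count 2n + 1 shows when the hidden variable makes the parameterization redundant, which is exactly the case the proposed BIC correction is meant to handle.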

Comments: Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI 1996)
Categories: cs.LG, cs.AI, stat.ML
Related articles:
arXiv:1301.7376 [cs.LG] (Published 2013-01-30)
Graphical Models and Exponential Families
arXiv:1206.6862 [cs.LG] (Published 2012-06-27)
On the Number of Samples Needed to Learn the Correct Structure of a Bayesian Network
arXiv:2409.14593 [cs.LG] (Published 2024-09-22)
Testing Causal Models with Hidden Variables in Polynomial Delay via Conditional Independencies