arXiv Analytics

arXiv:1004.5326 [cond-mat.dis-nn]

Designing neural networks that process mean values of random variables

Michael J. Barber, John W. Clark

Published 2010-04-29 (Version 1)

We introduce a class of neural networks derived from probabilistic models in the form of Bayesian networks. By imposing additional assumptions about the nature of the probabilistic models represented in the networks, we derive neural networks with standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the random variables, that can pool multiple sources of evidence, and that deal cleanly and consistently with inconsistent or contradictory evidence. The presented neural networks capture many properties of Bayesian networks, providing distributed versions of probabilistic models.
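To make the core idea concrete, here is a minimal sketch (not the paper's construction) of the fact the abstract relies on: for binary random variables, the mean of a child node in a Bayesian network is a linear function of the mean of its parent, so a single unit whose weight and bias are read directly off the conditional probability table, with no training, reproduces the exact mean. All numbers below are arbitrary illustrative choices.

```python
# Hypothetical two-node Bayesian network X -> Y with binary variables.
# Conditional probability table entries are arbitrary illustrative values.
p_x1 = 0.7                          # P(X = 1), i.e. the mean of X
p_y1_given_x = {0: 0.2, 1: 0.9}     # P(Y = 1 | X = x)

# Exact mean of Y, obtained by marginalizing over X.
mean_y_exact = (1 - p_x1) * p_y1_given_x[0] + p_x1 * p_y1_given_x[1]

# "Neural" unit: weight and bias come straight from the probabilistic
# model, so no training is needed to compute the mean of Y from the
# mean of X.
w = p_y1_given_x[1] - p_y1_given_x[0]   # synaptic weight
b = p_y1_given_x[0]                     # bias
mean_y_unit = w * p_x1 + b

print(mean_y_exact, mean_y_unit)        # both print 0.69
```

For deeper or multiply connected networks the paper imposes additional assumptions on the probabilistic model; the sketch only illustrates why synaptic weights can be fixed by the model itself rather than learned.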

Related articles:
arXiv:cond-mat/0407436 (Published 2004-07-16)
Neural Networks Processing Mean Values of Random Variables
arXiv:cond-mat/0102274 (Published 2001-02-15)
Tractable approximations for probabilistic models: The adaptive TAP mean field approach
arXiv:cond-mat/9910202 (Published 1999-10-13)
Central limit theorems for nonlinear hierarchical sequences of random variables