arXiv Analytics

arXiv:cond-mat/0407436

Neural Networks Processing Mean Values of Random Variables

M. J. Barber, J. W. Clark, C. H. Anderson

Published 2004-07-16 (Version 1)

We introduce a class of neural networks derived from probabilistic models in the form of Bayesian belief networks. By imposing additional assumptions about the nature of the probabilistic models represented in the belief networks, we derive neural networks with standard dynamics that require no training to determine the synaptic weights, that can pool multiple sources of evidence, and that deal cleanly and consistently with inconsistent or contradictory evidence. The presented neural networks capture many properties of Bayesian belief networks, providing distributed versions of probabilistic models.
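
As a rough illustration of the kind of construction the abstract describes (not the paper's own derivation), the sketch below fixes the "synaptic weights" of a single linear stage directly from the conditional probability table of a toy belief network A -> B, so that mean values (marginals) propagate without any training, and two independent pieces of evidence are pooled by multiplying likelihoods. All variable names and numbers here are hypothetical.

```python
import numpy as np

# Toy belief network A -> B with two states per node.
# The "weights" are the entries of P(B | A), taken straight from the model,
# so nothing is learned.
p_A = np.array([0.7, 0.3])            # prior P(A)
P_B_given_A = np.array([[0.9, 0.2],   # rows: states of B
                        [0.1, 0.8]])  # columns: states of A

# Forward pass: the marginal (mean value) of B is a fixed linear map of the
# marginal of A.
p_B = P_B_given_A @ p_A
print("P(B) =", p_B)

# Pooling evidence: two independent noisy observations of B, each given as a
# likelihood vector, combined by elementwise product and renormalization.
lik_1 = np.array([0.6, 0.4])
lik_2 = np.array([0.8, 0.2])
posterior_B = p_B * lik_1 * lik_2
posterior_B /= posterior_B.sum()
print("P(B | evidence) =", posterior_B)
```

Even when the two likelihood vectors favor different states, the product-and-renormalize step yields a single consistent posterior, which is one simple sense in which conflicting evidence can be handled cleanly.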

Comments: 7 pages, 3 figures, 1 table; submitted to Phys. Rev. E
Categories: cond-mat.dis-nn
Related articles:
arXiv:1004.5326 [cond-mat.dis-nn] (Published 2010-04-29)
Designing neural networks that process mean values of random variables
arXiv:cond-mat/0102274 (Published 2001-02-15)
Tractable approximations for probabilistic models: The adaptive TAP mean field approach
arXiv:cond-mat/9910202 (Published 1999-10-13)
Central limit theorems for nonlinear hierarchical sequences of random variables