arXiv:1211.3024 [cond-mat.dis-nn]

Generalization learning in a perceptron with binary synapses

Carlo Baldassi

Published 2012-11-13, Version 1

We consider the generalization problem for a perceptron with binary synapses implementing the Stochastic Belief-Propagation-Inspired (SBPI) learning algorithm, which we proposed earlier, and perform a mean-field calculation to obtain a differential equation describing the behaviour of the device in the limit of a large number of synapses N. We show that the solving time of SBPI is of order N sqrt(log N), while the similar, well-known clipped perceptron (CP) algorithm does not converge to a solution at all within the time frame we considered. The analysis gives some insight into the ongoing process and shows that, in this context, the SBPI algorithm is equivalent to a new, simpler algorithm, which differs from the CP algorithm only by the addition of a stochastic, unsupervised meta-plastic reinforcement process, whose rate of application must be less than sqrt(2/(pi N)) for learning to be achieved effectively. The analytical results are confirmed by simulations.
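The simplified algorithm mentioned in the abstract (CP plus a stochastic, unsupervised meta-plastic reinforcement applied at a rate below sqrt(2/(pi N))) can be sketched roughly as follows. This is an illustrative reconstruction from the abstract only: the hidden-state cap K, the update magnitudes, and the function name `sbpi_like_train` are assumptions, not the paper's exact rule.

```python
import math
import random

def sbpi_like_train(patterns, labels, n_epochs=50, p_s=None, seed=0):
    """CP learning on hidden integer states, plus a stochastic unsupervised
    meta-plastic reinforcement step applied at rate p_s (sketch, not the
    paper's exact prescription)."""
    rng = random.Random(seed)
    N = len(patterns[0])
    if p_s is None:
        # choose a rate safely below the sqrt(2/(pi N)) threshold from the abstract
        p_s = 0.5 * math.sqrt(2.0 / (math.pi * N))
    h = [1] * N          # hidden integer synaptic states
    K = 10               # cap on |h| (illustrative assumption)
    for _ in range(n_epochs):
        errors = 0
        for xi, sigma in zip(patterns, labels):
            # binary synapses are the signs of the hidden states
            w = [1 if hi > 0 else -1 for hi in h]
            out = 1 if sum(wi * x for wi, x in zip(w, xi)) > 0 else -1
            if out != sigma:
                errors += 1
                # CP step: on error, push every hidden state toward
                # the desired output
                for i in range(N):
                    h[i] = max(-K, min(K, h[i] + sigma * xi[i]))
            # stochastic, unsupervised meta-plastic reinforcement:
            # with small probability p_s, push each state deeper
            # into its current well, regardless of the label
            for i in range(N):
                if rng.random() < p_s:
                    h[i] = max(-K, min(K, h[i] + (1 if h[i] > 0 else -1)))
        if errors == 0:
            break
    return [1 if hi > 0 else -1 for hi in h]
```

In a generalization setting one would label random patterns with a teacher perceptron and track the overlap between the returned student weights and the teacher; the abstract's claim is that without the reinforcement step (pure CP) this process fails to converge on the considered time scale.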

Comments: 16 pages, 4 figures
Journal: Journal of Statistical Physics 136 (2009) 902-916
Categories: cond-mat.dis-nn
Subjects: F.2.2, I.2.6, I.5.1
Related articles:
arXiv:1105.1651 [cond-mat.dis-nn] (Published 2011-05-09, updated 2011-07-11)
Combined local search strategy for learning in networks of binary synapses
arXiv:1408.1784 [cond-mat.dis-nn] (Published 2014-08-08)
Origin of the computational hardness for learning with binary synapses
arXiv:1408.2725 [cond-mat.dis-nn] (Published 2014-08-12, updated 2014-10-20)
Notes on stochastic (bio)-logic gates: the role of allosteric cooperativity