arXiv:1706.04265 [cs.LG]

Transfer entropy-based feedback improves performance in artificial neural networks

Sebastian Herzog, Christian Tetzlaff, Florentin Wörgötter

Published 2017-06-13 (Version 1)

The structure of the majority of modern deep neural networks is characterized by unidirectional feed-forward connectivity across a very large number of layers. By contrast, the architecture of the cortex of vertebrates contains fewer hierarchical levels but many recurrent and feedback connections. Here we show that a small, few-layer artificial neural network that employs feedback will reach top-level performance on a standard benchmark task, otherwise only obtained by large feed-forward structures. To achieve this, we use feed-forward transfer entropy between neurons to structure feedback connectivity. Transfer entropy can be understood intuitively here as a measure of the relevance of certain pathways in the network, which are then amplified by feedback. Feedback may therefore be key to high network performance in small, brain-like architectures.
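
The abstract does not specify how transfer entropy is estimated, but the core quantity can be sketched briefly. The Python snippet below estimates pairwise transfer entropy TE(X -> Y) between two neurons' activation traces with a naive histogram estimator; the binning scheme, the function name transfer_entropy, and all parameter values are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def transfer_entropy(x, y, n_bins=8):
        # Discretize the two activation traces into n_bins states each.
        # (Histogram binning is an assumption; the paper does not
        # specify its estimator.)
        edges_x = np.histogram_bin_edges(x, bins=n_bins)[1:-1]
        edges_y = np.histogram_bin_edges(y, bins=n_bins)[1:-1]
        xd = np.digitize(x, edges_x)   # values in 0 .. n_bins-1
        yd = np.digitize(y, edges_y)

        # Align the triplet (y_{t+1}, y_t, x_t).
        y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]

        # Joint probability p(y_{t+1}, y_t, x_t) by counting.
        joint = np.zeros((n_bins, n_bins, n_bins))
        np.add.at(joint, (y_next, y_now, x_now), 1.0)
        p_xyz = joint / joint.sum()

        p_yy = p_xyz.sum(axis=2)        # p(y_{t+1}, y_t)
        p_y = p_xyz.sum(axis=(0, 2))    # p(y_t)
        p_yx = p_xyz.sum(axis=0)        # p(y_t, x_t)

        # TE(X -> Y) = sum p(y_{t+1}, y_t, x_t)
        #              * log2[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]
        te = 0.0
        for i, j, k in zip(*np.nonzero(p_xyz)):
            cond_full = p_xyz[i, j, k] / p_yx[j, k]
            cond_reduced = p_yy[i, j] / p_y[j]
            te += p_xyz[i, j, k] * np.log2(cond_full / cond_reduced)
        return te

    # A driven pair: y follows x with one step of lag, so TE(x -> y)
    # should clearly exceed TE(y -> x).
    rng = np.random.default_rng(0)
    x = rng.normal(size=5000)
    y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)
    print(transfer_entropy(x, y), transfer_entropy(y, x))

In the paper's setting, such pairwise scores computed between connected units would be what identifies the relevant feed-forward pathways to amplify by feedback; how the scores are mapped to feedback connectivity is described in the paper itself, not in this sketch.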

Related articles:
arXiv:1612.01589 [cs.LG] (Published 2016-12-05)
Improving the Performance of Neural Networks in Regression Tasks Using Drawering
arXiv:1612.03450 [cs.LG] (Published 2016-12-11)
Noisy subspace clustering via matching pursuits
arXiv:cs/0211003 [cs.LG] (Published 2002-11-01)
Evaluation of the Performance of the Markov Blanket Bayesian Classifier Algorithm