arXiv Analytics

arXiv:1901.02104 [cs.LG]

On the effect of the activation function on the distribution of hidden nodes in a deep network

Philip M. Long, Hanie Sedghi

Published 2019-01-07 (Version 1)

We analyze the joint probability distribution of the lengths of the vectors of hidden variables in the different layers of a fully connected deep network, when the weights and biases are chosen randomly according to Gaussian distributions and the input lies in $\{ -1, 1\}^N$. We show that, if the activation function $\phi$ satisfies a minimal set of assumptions, satisfied by every activation function that we know to be used in practice, then, as the width of the network grows large, the `length process' converges in probability to a length map that is determined as a simple function of the variances of the random weights and biases and of the activation function $\phi$. We also show that this convergence may fail for activation functions $\phi$ that violate our assumptions.
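For intuition, here is a minimal sketch (Python, not from the paper) of the kind of length map the abstract describes, assuming the standard mean-field form from the wide-network literature: the normalized squared length $q^\ell$ of the hidden vector at layer $\ell$ evolves as $q^{\ell+1} = \sigma_w^2\, \mathbb{E}_{z \sim N(0,1)}[\phi(\sqrt{q^\ell}\, z)^2] + \sigma_b^2$. The recursion form, the variance values, and the name length_map are illustrative assumptions, not the paper's notation; the Gaussian expectation is estimated by Monte Carlo.

    # Illustrative sketch (assumed mean-field form, not the paper's code):
    # one step of the length map q^{l+1} = sigma_w^2 * E[phi(sqrt(q^l) z)^2] + sigma_b^2,
    # with z ~ N(0, 1) and the expectation estimated by Monte Carlo.
    import numpy as np

    def length_map(phi, q, sigma_w2, sigma_b2, n_samples=100_000, rng=None):
        """Map the normalized squared length q^l at one layer to q^{l+1}."""
        rng = np.random.default_rng(rng)
        z = rng.standard_normal(n_samples)
        return sigma_w2 * np.mean(phi(np.sqrt(q) * z) ** 2) + sigma_b2

    # Inputs in {-1, 1}^N have squared norm N, so the normalized squared
    # length starts at q^0 = 1. Iterate the map for an illustrative tanh
    # network; the variance settings below are arbitrary examples.
    q = 1.0
    for layer in range(10):
        q = length_map(np.tanh, q, sigma_w2=1.5, sigma_b2=0.05, rng=layer)
        print(f"layer {layer + 1}: q = {q:.4f}")

For a bounded activation such as tanh, the iterates typically settle quickly at a fixed point of the map, which is the deterministic limit that the paper's convergence-in-probability result concerns.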

Related articles:
arXiv:1905.10585 [cs.LG] (Published 2019-05-25)
Hebbian-Descent
arXiv:1809.03272 [cs.LG] (Published 2018-09-10)
Privacy-Preserving Deep Learning for any Activation Function
arXiv:2409.14593 [cs.LG] (Published 2024-09-22)
Testing Causal Models with Hidden Variables in Polynomial Delay via Conditional Independencies