
arXiv:1602.05897 [cs.LG]

Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity

Amit Daniely, Roy Frostig, Yoram Singer

Published 2016-02-18 (Version 1)

We develop a general duality between neural networks and compositional kernels, striving towards a better understanding of deep learning. We show that initial representations generated by common random initializations are sufficiently rich to express all functions in the dual kernel space. Hence, though the training objective is hard to optimize in the worst case, the initial weights form a good starting point for optimization. Our dual view also reveals a pragmatic and aesthetic perspective on neural networks and underscores their expressive power.
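To make the dual view concrete, the sketch below (not from the paper's code) illustrates the simplest instance of the duality: for a single hidden layer of ReLU units with standard Gaussian initialization, the empirical inner product of the random initial representations approximates a closed-form compositional ("dual") kernel, here the arc-cosine kernel of order 1 (Cho & Saul, 2009). The network width, inputs, and function names are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code): the empirical kernel of a
# randomly initialized single-hidden-layer ReLU network approximates the
# closed-form arc-cosine kernel, its dual kernel under Gaussian initialization.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def empirical_kernel(x, xp, width=100_000):
    """Monte Carlo estimate of E[relu(w.x) * relu(w.x')] over w ~ N(0, I)."""
    W = rng.standard_normal((width, x.shape[0]))   # random initial weights
    return relu(W @ x) @ relu(W @ xp) / width      # average over hidden units

def arccos_kernel(x, xp):
    """Closed-form dual kernel for ReLU: arc-cosine kernel of order 1."""
    nx, nxp = np.linalg.norm(x), np.linalg.norm(xp)
    theta = np.arccos(np.clip(x @ xp / (nx * nxp), -1.0, 1.0))
    return nx * nxp * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

x, xp = rng.standard_normal(10), rng.standard_normal(10)
print(empirical_kernel(x, xp))   # ~ arccos_kernel(x, xp) for large width
print(arccos_kernel(x, xp))
```

As the width grows, the two printed values agree, which is the sense in which the initial random representation already "expresses" functions in the dual kernel space; the paper's contribution is a general version of this correspondence for compositional kernels induced by deeper architectures.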

Related articles:
arXiv:1612.00796 [cs.LG] (Published 2016-12-02)
Overcoming catastrophic forgetting in neural networks
arXiv:1803.00909 [cs.LG] (Published 2018-02-19)
Understanding the Loss Surface of Neural Networks for Binary Classification
arXiv:1803.01206 [cs.LG] (Published 2018-03-03)
On the Power of Over-parametrization in Neural Networks with Quadratic Activation