arXiv Analytics

arXiv:1602.04485 [cs.LG]

Benefits of depth in neural networks

Matus Telgarsky

Published 2016-02-14 (Version 1)

For any positive integer $k$, there exist neural networks with $\Theta(k^3)$ layers, $\Theta(1)$ nodes per layer, and $\Theta(1)$ distinct parameters which cannot be approximated by networks with $\mathcal{O}(k)$ layers unless those networks are exponentially large: they must possess $\Omega(2^k)$ nodes. This result is proved here for a class of nodes termed "semi-algebraic gates", which includes the common choices of ReLU, maximum, indicator, and piecewise polynomial functions. It therefore establishes benefits of depth not only for standard networks with ReLU gates, but also for convolutional networks with ReLU and maximization gates, and for boosted decision trees (in this last case with a stronger separation: $\Omega(2^{k^3})$ total tree nodes are required).
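
To make the flavor of such a separation concrete, here is a minimal NumPy sketch (illustrative, not code from the paper; function names are ours) of the kind of construction used in the simplified companion paper linked below: a tent map built from two ReLU units, composed $k$ times, yields a sawtooth with $2^{k-1}$ oscillations using $\mathcal{O}(k)$ layers and $\mathcal{O}(1)$ nodes per layer, whereas a shallow ReLU network needs a number of nodes proportional to the oscillation count.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def tent(x):
    # Tent map on [0, 1] written with two ReLU units:
    # tent(x) = 2x on [0, 1/2] and 2(1 - x) on [1/2, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_sawtooth(x, k):
    # Composing the tent map k times gives a sawtooth with
    # 2^(k-1) peaks: O(k) layers, O(1) nodes per layer.
    for _ in range(k):
        x = tent(x)
    return x

xs = np.linspace(0.0, 1.0, 1025)
ys = deep_sawtooth(xs, 4)
# Count strict local maxima; prints 8 = 2^(4-1), matching the claim.
print(int(np.sum((ys[1:-1] > ys[:-2]) & (ys[1:-1] > ys[2:]))))
```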

Comments: For a simplified version, see http://arxiv.org/abs/1509.08101
Categories: cs.LG, cs.NE, stat.ML
Related articles
arXiv:1706.03301 [cs.LG] (Published 2017-06-11)
Neural networks and rational functions
arXiv:1706.02690 [cs.LG] (Published 2017-06-08)
Principled Detection of Out-of-Distribution Examples in Neural Networks
arXiv:1805.09370 [cs.LG] (Published 2018-05-23)
Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients