arXiv:1803.01206 [cs.LG]

On the Power of Over-parametrization in Neural Networks with Quadratic Activation

Simon S. Du, Jason D. Lee

Published 2018-03-03 (Version 1)

We provide new theoretical insights into why over-parametrization is effective in learning neural networks. For a shallow network with $k$ hidden nodes, quadratic activation, and $n$ training data points, we show that as long as $k \ge \sqrt{2n}$, over-parametrization enables local search algorithms to find a \emph{globally} optimal solution for general smooth and convex loss functions. Further, even though the number of parameters may exceed the sample size, we use the theory of Rademacher complexity to show that, with weight decay, the solution also generalizes well when the data is sampled from a regular distribution such as the Gaussian. To prove that the loss function has benign landscape properties when $k \ge \sqrt{2n}$, we adopt an idea from smoothed analysis, which may have further applications in studying loss surfaces of neural networks.
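To make the setting concrete, here is a minimal NumPy sketch (not the authors' code) of the regime the abstract describes: a shallow network $f(x) = \sum_{j=1}^k (w_j^\top x)^2$ with quadratic activation and output weights fixed to one, trained by gradient descent with weight decay on $n$ Gaussian samples labeled by a planted teacher. The choice of $k = \lceil\sqrt{2n}\rceil$, the teacher size, and all step sizes are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch: gradient descent on a shallow quadratic-activation
# network with k >= sqrt(2n) hidden units; hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 200                        # input dimension, sample size
k = int(np.ceil(np.sqrt(2 * n)))      # over-parametrization level k >= sqrt(2n)

X = rng.standard_normal((n, d))                    # Gaussian inputs
W_star = rng.standard_normal((3, d)) / np.sqrt(d)  # planted 3-unit teacher
y = np.sum((X @ W_star.T) ** 2, axis=1)            # teacher labels

W = 0.3 * rng.standard_normal((k, d)) / np.sqrt(d)  # student initialization
lr, wd = 1e-2, 1e-4                                  # step size, weight decay

for _ in range(5000):
    pred = np.sum((X @ W.T) ** 2, axis=1)   # f(x_i) = sum_j (w_j^T x_i)^2
    resid = pred - y
    # gradient of (1/2n) * sum_i resid_i^2 with respect to W, plus weight decay
    grad = (2.0 / n) * (W @ X.T * resid) @ X + wd * W
    W -= lr * grad

print("final training loss:",
      0.5 * np.mean((np.sum((X @ W.T) ** 2, axis=1) - y) ** 2))
```

In this over-parametrized regime one typically observes the training loss driven to near zero from a random start, consistent with the benign-landscape claim; the sketch does not, of course, verify the theorem itself.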

Related articles:
arXiv:1803.00909 [cs.LG] (Published 2018-02-19)
Understanding the Loss Surface of Neural Networks for Binary Classification
arXiv:1901.02322 [cs.LG] (Published 2019-01-08)
Fusion Strategies for Learning User Embeddings with Neural Networks
arXiv:1602.05897 [cs.LG] (Published 2016-02-18)
Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity