arXiv:1805.08671 [stat.ML]

Adding One Neuron Can Eliminate All Bad Local Minima

Shiyu Liang, Ruoyu Sun, Jason D. Lee, R. Srikant

Published 2018-05-22 (Version 1)

One of the main difficulties in analyzing neural networks is the non-convexity of the loss function, which may have many bad local minima. In this paper, we study the loss landscape of neural networks for binary classification tasks. Under mild assumptions, we prove that after adding one special neuron with a skip connection to the output, or one special neuron per layer, every local minimum is a global minimum.
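The construction summarized in the abstract lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration: a base classifier augmented with one extra neuron whose output is added directly to the network output via a skip connection, trained with a small quadratic penalty on the neuron's output weight. The abstract only says "one special neuron"; the exponential activation, the `lam * a^2` regularizer, and all names here (`AugmentedNet`, `augmented_loss`, `lam`) are illustrative assumptions, not the paper's verbatim construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugmentedNet(nn.Module):
    """Base binary classifier f(x; theta) plus one auxiliary neuron
    connected to the output by a skip connection. The exponential
    activation is an assumption made for this sketch."""
    def __init__(self, base_net: nn.Module, in_dim: int):
        super().__init__()
        self.base = base_net
        # Extra neuron's parameters: output weight a, input weights w, bias b.
        self.a = nn.Parameter(torch.zeros(1))
        self.w = nn.Parameter(torch.zeros(in_dim))
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # Skip connection: the neuron's output is added directly to the
        # network output, bypassing all hidden layers.
        return self.base(x).squeeze(-1) + self.a * torch.exp(x @ self.w + self.b)

def augmented_loss(model, x, y, lam=1e-3):
    """Logistic loss on labels y in {-1, +1}, plus a quadratic penalty
    on the auxiliary output weight a, which discourages the extra
    neuron from contributing at a minimum (assumed regularized form)."""
    margins = y * model(x)
    return F.softplus(-margins).mean() + lam * model.a.pow(2).sum()

# Hypothetical usage with an arbitrary base network:
base = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
model = AugmentedNet(base, in_dim=10)
```

Under the paper's result, every local minimum of the augmented loss is a global minimum; intuitively, the penalty term drives the auxiliary weight to zero at any minimum, so the extra neuron's contribution vanishes and the base network alone realizes the optimum.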

Related articles:
arXiv:2105.02831 [stat.ML] (Published 2021-05-06)
The layer-wise L1 Loss Landscape of Neural Nets is more complex around local minima
arXiv:1901.03909 [stat.ML] (Published 2019-01-12)
Eliminating all bad Local Minima from Loss Landscapes without even adding an Extra Unit
arXiv:1605.07110 [stat.ML] (Published 2016-05-23)
Deep Learning without Poor Local Minima