arXiv Analytics


arXiv:2105.02831 [stat.ML]

The layer-wise L1 Loss Landscape of Neural Nets is more complex around local minima

Peter Hinz

Published: 2021-05-06 (Version 1)

For fixed training data and fixed parameters in all other layers, the L1 loss of a ReLU neural network, viewed as a function of the first layer's parameters, is a piecewise affine function. We use the Deep ReLU Simplex algorithm to minimize the loss monotonically by iterating over adjacent vertices, and we analyze the trajectory of these vertex positions. We empirically observe that, in a neighbourhood of a local minimum, the iterations behave differently, so that conclusions about the loss level and the proximity of the local minimum can be drawn before it has been reached: firstly, the loss appears to decay exponentially slowly across the iterated adjacent vertices, so that the loss level at the local minimum can be estimated from the loss levels of subsequently iterated vertices; secondly, we observe a strong increase in vertex density around local minima. This could have far-reaching consequences for the design of new gradient-descent algorithms that might improve their convergence rate by exploiting these observations.
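The following is a minimal sketch, not the paper's Deep ReLU Simplex implementation, illustrating the piecewise-affinity claim: with the training data and all other layers fixed, the L1 loss of a small ReLU network is probed along a one-dimensional line in first-layer weight space, where it is piecewise linear. The network sizes, data, and probing direction are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and a fixed two-layer ReLU network y_hat = W2 relu(W1 x + b1) + b2.
X = rng.normal(size=(50, 3))            # 50 samples, 3 features
y = rng.normal(size=(50, 1))            # regression targets
W1 = rng.normal(size=(4, 3)); b1 = rng.normal(size=4)
W2 = rng.normal(size=(1, 4)); b2 = rng.normal(size=1)

def l1_loss(w1_flat):
    """L1 training loss as a function of the (flattened) first-layer weights."""
    W1_ = w1_flat.reshape(4, 3)
    hidden = np.maximum(X @ W1_.T + b1, 0.0)   # ReLU activations
    pred = hidden @ W2.T + b2
    return np.abs(pred - y).sum()

# Probe the loss along the line W1 + t * D: it is piecewise affine in t,
# with kinks only where a ReLU activation pattern or a residual sign flips.
D = rng.normal(size=W1.size)
ts = np.linspace(-2.0, 2.0, 2001)
losses = np.array([l1_loss(W1.flatten() + t * D) for t in ts])

# Second differences vanish away from the finitely many kinks.
second_diff = np.diff(losses, n=2)
print("fraction of probe points with numerically zero curvature:",
      np.mean(np.abs(second_diff) < 1e-8))
```

The vertices visited by the paper's algorithm are the points where several such affine pieces meet; the sketch only demonstrates the underlying piecewise-affine structure, not the vertex-to-vertex descent itself.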

Comments: 4 pages, 5 figures
Categories: stat.ML, cs.LG
Related articles:
arXiv:2310.00327 [stat.ML] (Published 2023-09-30): Memorization with neural nets: going beyond the worst case
arXiv:1805.08671 [stat.ML] (Published 2018-05-22): Adding One Neuron Can Eliminate All Bad Local Minima
arXiv:1605.07110 [stat.ML] (Published 2016-05-23): Deep Learning without Poor Local Minima