arXiv Analytics

arXiv:1902.02366 [cs.LG]

Negative eigenvalues of the Hessian in deep neural networks

Guillaume Alain, Nicolas Le Roux, Pierre-Antoine Manzagol

Published 2019-02-06 (Version 1)

The loss function of deep networks is known to be non-convex, but the precise nature of this non-convexity is still an active area of research. In this work, we study the loss landscape of deep networks through the eigendecomposition of their Hessian matrices. In particular, we examine how important the negative eigenvalues are and the benefits one can observe by handling them appropriately.
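The central object of the abstract, the spectrum of the Hessian of a network's loss, can be computed directly for a toy model. Below is a minimal sketch (not the paper's code): a tiny tanh network with a mean-squared-error loss, its Hessian approximated by central finite differences, and its eigenvalues inspected for negative entries. The network size, random data, and finite-difference step are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data and a small tanh MLP: 2 inputs, 3 hidden units, 1 output.
X = rng.normal(size=(20, 2))
y = rng.normal(size=(20, 1))

n_in, n_hid, n_out = 2, 3, 1
n_params = n_in * n_hid + n_hid * n_out  # 9 weights (biases omitted for brevity)

def loss(theta):
    """Mean squared error of the tanh MLP, with all weights flattened into theta."""
    W1 = theta[: n_in * n_hid].reshape(n_in, n_hid)
    W2 = theta[n_in * n_hid :].reshape(n_hid, n_out)
    pred = np.tanh(X @ W1) @ W2
    return float(np.mean((pred - y) ** 2))

def hessian(f, theta, eps=1e-4):
    """Dense Hessian of f at theta via central finite differences (O(d^2) evals)."""
    d = theta.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            e_i = np.zeros(d); e_i[i] = eps
            e_j = np.zeros(d); e_j[j] = eps
            H[i, j] = (
                f(theta + e_i + e_j) - f(theta + e_i - e_j)
                - f(theta - e_i + e_j) + f(theta - e_i - e_j)
            ) / (4 * eps**2)
    return 0.5 * (H + H.T)  # symmetrize to absorb numerical noise

theta = rng.normal(scale=0.5, size=n_params)
H = hessian(loss, theta)
eigvals = np.linalg.eigvalsh(H)  # real eigenvalues, ascending order

print("eigenvalues:", np.round(eigvals, 4))
print("negative directions:", int(np.sum(eigvals < 0)))
```

At a random point in parameter space, the spectrum typically contains both positive and negative eigenvalues; the negative ones mark descent directions of negative curvature, which is exactly the structure the paper investigates. For real networks with many parameters, the dense finite-difference Hessian is intractable and Hessian-vector products (e.g. via automatic differentiation) are used instead.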

Related articles:
arXiv:1907.08475 [cs.LG] (Published 2019-07-19)
Representational Capacity of Deep Neural Networks -- A Computing Study
arXiv:1905.09680 [cs.LG] (Published 2019-05-23)
DEEP-BO for Hyperparameter Optimization of Deep Networks
arXiv:1904.08050 [cs.LG] (Published 2019-04-17)
Sparseout: Controlling Sparsity in Deep Networks