arXiv Analytics

arXiv:1808.00408 [cond-mat.dis-nn]

Geometry of energy landscapes and the optimizability of deep neural networks

Simon Becker, Yao Zhang, Alpha A. Lee

Published 2018-08-01Version 1

Deep neural networks are workhorse models in machine learning, composed of multiple layers of non-linear functions applied in series. Their loss function is highly non-convex, yet empirically even plain gradient descent minimisation is sufficient to arrive at accurate and predictive models. Why deep neural networks are so easily optimizable has hitherto been unknown. We analyze the energy landscape of a spin glass model of deep neural networks using random matrix theory and algebraic geometry. We show analytically that the multilayered structure holds the key to optimizability: fixing the number of parameters and increasing network depth, the number of stationary points in the loss function decreases, minima become more clustered in parameter space, and the tradeoff between the depth and width of minima becomes less severe. Our analytical results are verified numerically through comparison with neural networks trained on a set of classical benchmark datasets. Our model uncovers generic design principles of machine learning models.
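The abstract's central claim, that at a fixed parameter budget deeper networks are easier to optimize, can be probed with a toy numerical experiment. The sketch below (a plain-NumPy illustration under assumed settings, not the authors' spin-glass analysis) trains a shallow-wide and a deep-narrow tanh MLP with the same total number of weights on a small regression task using gradient descent, so the two final losses can be compared directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def init(widths):
    # Random weight matrices for an MLP with the given layer widths,
    # scaled by 1/sqrt(fan-in) so activations stay O(1) at initialisation.
    return [rng.standard_normal((a, b)) / np.sqrt(a)
            for a, b in zip(widths[:-1], widths[1:])]

def forward(Ws, X):
    h = X
    for W in Ws[:-1]:
        h = np.tanh(h @ W)
    return h @ Ws[-1]

def loss(Ws, X, y):
    return 0.5 * np.mean((forward(Ws, X) - y) ** 2)

def num_grad(Ws, X, y, eps=1e-5):
    # Central finite-difference gradient: slow but dependency-free
    # and hard to get wrong for a small toy model.
    grads = []
    for W in Ws:
        g = np.zeros_like(W)
        it = np.nditer(W, flags=["multi_index"])
        for _ in it:
            i = it.multi_index
            old = W[i]
            W[i] = old + eps; lp = loss(Ws, X, y)
            W[i] = old - eps; lm = loss(Ws, X, y)
            W[i] = old
            g[i] = (lp - lm) / (2 * eps)
        grads.append(g)
    return grads

def train(widths, X, y, steps=400, lr=0.05):
    # Plain gradient descent; returns (initial loss, final loss).
    Ws = init(widths)
    l0 = loss(Ws, X, y)
    for _ in range(steps):
        for W, g in zip(Ws, num_grad(Ws, X, y)):
            W -= lr * g
    return l0, loss(Ws, X, y)

# Toy 1-D regression task (hypothetical choice for illustration only).
X = np.linspace(-1.0, 1.0, 32).reshape(-1, 1)
y = np.sin(np.pi * X)

# Both architectures have 24 weights: 1*12 + 12*1 vs 1*3 + 3*3 + 3*3 + 3*1.
shallow = train([1, 12, 1], X, y)
deep = train([1, 3, 3, 3, 1], X, y)
print("shallow (init -> final):", shallow)
print("deep    (init -> final):", deep)
```

A single random seed proves nothing about the landscape geometry, of course; the paper's statement concerns the statistics of stationary points over the ensemble, so a faithful check would average over many initialisations.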

Related articles:
arXiv:2306.12548 [cond-mat.dis-nn] (Published 2023-06-21)
Finite-time Lyapunov exponents of deep neural networks
arXiv:2407.19353 [cond-mat.dis-nn] (Published 2024-07-28)
A spring-block theory of feature learning in deep neural networks
arXiv:1809.09349 [cond-mat.dis-nn] (Published 2018-09-25)
The jamming transition as a paradigm to understand the loss landscape of deep neural networks