arXiv Analytics

arXiv:2108.03579 [cs.LG]

Expressive Power and Loss Surfaces of Deep Learning Models

Simant Dube

Published 2021-08-08 (Version 1)

The goals of this paper are twofold. The first is to serve as an expository tutorial on the workings of deep learning models, emphasizing geometric intuition for why deep learning succeeds. The second is to complement current results on the expressive power of deep learning models and their loss surfaces with novel insights and results. In particular, we describe how deep neural networks carve out manifolds, especially when multiplication neurons are introduced. Multiplication appears in dot products and in the attention mechanism, and it is employed in capsule networks and self-attention-based transformers. We also describe how the random polynomial, random matrix, spin glass, and computational complexity perspectives on loss surfaces are interconnected.
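Since the abstract singles out multiplication as the common ingredient of dot products and attention, a minimal sketch may make this concrete. The snippet below implements scaled dot-product attention in plain NumPy; it illustrates the general mechanism rather than code from the paper, and all names and dimensions are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: each entry of Q @ K.T is a sum of
    pairwise multiplications between query and key features, i.e. the
    multiplicative interactions the abstract highlights."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # multiplicative query-key interactions
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted sum of value vectors

# Illustrative shapes: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```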

Related articles (most relevant):
arXiv:2011.06796 [cs.LG] (Published 2020-11-13)
Wisdom of the Ensemble: Improving Consistency of Deep Learning Models
Lijing Wang et al.
arXiv:1811.11880 [cs.LG] (Published 2018-11-28)
Predicting the Computational Cost of Deep Learning Models
arXiv:2203.11196 [cs.LG] (Published 2022-03-18)
Performance of Deep Learning models with transfer learning for multiple-step-ahead forecasts in monthly time series