arXiv Analytics

arXiv:1607.04917 [cs.LG]

Piecewise convexity of artificial neural networks

Blaine Rister

Published 2016-07-17 (Version 1)

Although artificial neural networks have shown great promise in applications ranging from computer vision to speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees concerning networks with continuous piecewise affine activation functions, which have in recent years become the norm. We prove three main results. Firstly, that the network is piecewise convex as a function of the input data. Secondly, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Finally, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. Accordingly, we show that any point to which gradient descent converges is a local minimum of some piece. Thus gradient descent converges to non-minima only at the boundaries of pieces. These results might offer some insights into the effectiveness of gradient descent methods in optimizing this class of networks.
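As an illustrative sketch of these notions (not taken from the paper itself; the symbols f, g, w, and b are introduced here only for illustration), consider a single ReLU unit. Its domain splits into two half-spaces, on each of which the unit is affine and therefore convex:

\[
f(x) = \max(0,\, w^\top x + b) =
\begin{cases}
w^\top x + b, & \text{if } w^\top x + b \ge 0,\\
0, & \text{otherwise,}
\end{cases}
\]

so f is convex on each of the pieces \(\{x : w^\top x + b \ge 0\}\) and its complement, though not smooth on their shared boundary. For the multi-convexity notion, a standard toy example is \(g(u, v) = uv\): it is convex in u with v held fixed and convex in v with u held fixed, yet not jointly convex, since its Hessian has eigenvalues \(\pm 1\). The paper's results concern networks built from such piecewise affine units, where these properties hold piece by piece.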

Related articles:
arXiv:2006.02909 [cs.LG] (Published 2020-06-03)
Assessing Intelligence in Artificial Neural Networks
arXiv:2102.02153 [cs.LG] (Published 2021-02-03)
Fast Concept Mapping: The Emergence of Human Abilities in Artificial Neural Networks when Learning Embodied and Self-Supervised
arXiv:1705.01040 [cs.LG] (Published 2017-04-28)
Maximum Resilience of Artificial Neural Networks