arXiv Analytics


arXiv:1802.09210 [stat.ML]

A representer theorem for deep neural networks

Michael Unser

Published 2018-02-26 (Version 1)

We propose to optimize the activation functions of a deep neural network by adding a corresponding functional regularization to the cost function. We justify the use of a second-order total-variation criterion. This allows us to derive a general representer theorem for deep neural networks that makes a direct connection with splines and sparsity. Specifically, we show that the optimal network configuration can be achieved with activation functions that are nonuniform linear splines with adaptive knots. The bottom line is that the action of each neuron is encoded by a spline whose parameters (including the number of knots) are optimized during the training procedure. The scheme results in a computational structure that is compatible with the existing deep-ReLU and MaxOut architectures. It also suggests novel optimization challenges, while making the link with $\ell_1$ minimization and sparsity-promoting techniques explicit.
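As an illustration of the kind of computational structure the theorem points to, here is a minimal sketch (not the paper's reference code) of a learnable linear-spline activation, parameterized as an affine term plus a sum of shifted ReLUs with adaptive knots; the second-order total-variation penalty then reduces to an $\ell_1$ norm on the jump coefficients, which is the sparsity-promoting term one would add to the training loss. The class name, initialization, and knot range are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn


class LinearSplineActivation(nn.Module):
    """Sketch of a per-neuron linear-spline activation with adaptive knots."""

    def __init__(self, num_knots: int = 10, knot_range: float = 3.0):
        super().__init__()
        # Affine part b0 + b1*x lies in the null space of the TV(2) regularizer.
        self.b0 = nn.Parameter(torch.zeros(1))
        self.b1 = nn.Parameter(torch.ones(1))
        # Knot locations tau_k and jump amplitudes a_k (hypothetical init).
        self.knots = nn.Parameter(torch.linspace(-knot_range, knot_range, num_knots))
        self.a = nn.Parameter(torch.zeros(num_knots))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # phi(x) = b0 + b1*x + sum_k a_k * ReLU(x - tau_k)
        relu_terms = torch.relu(x.unsqueeze(-1) - self.knots)
        return self.b0 + self.b1 * x + (relu_terms * self.a).sum(dim=-1)

    def tv2_penalty(self) -> torch.Tensor:
        # Second-order TV of a linear spline: sum_k |a_k| (l1 on the jumps).
        return self.a.abs().sum()
```

In training, one would add `lambda * act.tv2_penalty()` to the cost function; the $\ell_1$ shrinkage drives some coefficients $a_k$ to zero, effectively pruning knots and recovering ReLU-like behavior as a special case.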

Related articles:
arXiv:1912.08526 [stat.ML] (Published 2019-12-18)
Analytic expressions for the output evolution of a deep neural network
arXiv:1712.07042 [stat.ML] (Published 2017-12-19)
Pafnucy -- A deep neural network for structure-based drug discovery
arXiv:2307.06581 [stat.ML] (Published 2023-07-13)
Deep Neural Networks for Semiparametric Frailty Models via H-likelihood