
arXiv:1811.02067 [cs.LG]

Generalization Bounds for Neural Networks: Kernels, Symmetry, and Sample Compression

Christopher Snyder, Sriram Vishwanath

Published 2018-11-05 (Version 1)

Though Deep Neural Networks (DNNs) are widely celebrated for their practical performance, they exhibit many intriguing phenomena related to depth that are difficult to explain both theoretically and intuitively. Understanding how weights in deep networks coordinate across layers to form useful learners has proven somewhat intractable, in part because of the repeated composition of nonlinearities induced by depth. We present a reparameterization of DNNs as a linear function of a particular feature map that is locally independent of the weights. This feature map transforms depth dependencies into simple {\em tensor} products and maps each input to a discrete subset of the feature space. Then, in analogy with logistic regression, we propose a max-margin assumption that enables us to present a so-called {\em sample compression} representation of the neural network in terms of the discrete activation state of neurons induced by $s$ "support vectors". We show how the number of support vectors relates to learning guarantees for neural networks through sample compression bounds, yielding a sample complexity of $O(ns/\epsilon)$ for networks with $n$ neurons. Additionally, this number of support vectors exhibits a monotonic dependence on width, depth, and label noise for simple networks trained on the MNIST dataset.
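As a concrete, simplified illustration of the reparameterization described in the abstract, consider a one-hidden-layer ReLU network; the following is a minimal sketch in our own notation ($W_1$, $w_2$, $a(x)$), not the paper's general construction:

\[
f(x) \;=\; w_2^\top \sigma(W_1 x)
\;=\; \sum_{j} w_{2,j}\, a_j(x)\, (W_1 x)_j
\;=\; \Big\langle \mathrm{vec}\big(\mathrm{diag}(w_2)\, W_1\big),\; \mathrm{vec}\big(a(x)\, x^\top\big) \Big\rangle,
\qquad a_j(x) = \mathbf{1}\{(W_1 x)_j > 0\}.
\]

Here the feature map $\phi(x) = \mathrm{vec}\big(a(x)\, x^\top\big)$ is a tensor (outer) product of the discrete activation state with the input, so the network output is linear in $\phi(x)$, and $\phi$ is locally constant in the weights because small weight perturbations do not change $a(x)$. In deeper networks, each additional layer contributes another activation pattern to the tensor product, which is how depth dependencies become products in feature space.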

Related articles: Most relevant | Search more
arXiv:1905.11488 [cs.LG] (Published 2019-05-27)
Generalization Bounds in the Predict-then-Optimize Framework
arXiv:1610.07883 [cs.LG] (Published 2016-10-25)
Generalization Bounds for Weighted Automata
arXiv:2210.00960 [cs.LG] (Published 2022-10-03)
Stability Analysis and Generalization Bounds of Adversarial Training