arXiv Analytics

arXiv:1905.12173 [stat.ML]

On the Inductive Bias of Neural Tangent Kernels

Alberto Bietti, Julien Mairal

Published 2019-05-29 (Version 1)

State-of-the-art neural networks are heavily over-parameterized, making the optimization algorithm a crucial ingredient for learning predictive models with good generalization properties. A recent line of work has shown that in a certain over-parameterized regime, the learning dynamics of gradient descent are governed by a certain kernel obtained at initialization, called the neural tangent kernel. We study the inductive bias of learning in such a regime by analyzing this kernel and the corresponding function space (RKHS). In particular, we study smoothness, approximation, and stability properties of functions with finite norm, including stability to image deformations in the case of convolutional networks.
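The kernel referred to in the abstract is the inner product of parameter gradients of the network output at initialization, K(x, x') = ⟨∇θ f(x; θ0), ∇θ f(x'; θ0)⟩. The sketch below (not from the paper) computes this empirical neural tangent kernel for a small fully connected network in JAX; the architecture and initialization scale are illustrative assumptions.

```python
# Minimal sketch of the empirical neural tangent kernel at initialization:
# K(x, x') = <grad_theta f(x; theta0), grad_theta f(x'; theta0)>.
import jax
import jax.numpy as jnp

def init_params(key, sizes):
    # Simple fully connected network; 1/sqrt(fan-in) weight scaling (an assumption).
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, wk = jax.random.split(key)
        params.append((jax.random.normal(wk, (din, dout)) / jnp.sqrt(din),
                       jnp.zeros(dout)))
    return params

def forward(params, x):
    # Scalar-output MLP with ReLU activations.
    h = x
    for w, b in params[:-1]:
        h = jax.nn.relu(h @ w + b)
    w, b = params[-1]
    return (h @ w + b).squeeze()

def ntk_entry(params, x1, x2):
    # One kernel entry: inner product of parameter gradients at the two inputs.
    g1 = jax.grad(forward)(params, x1)
    g2 = jax.grad(forward)(params, x2)
    leaves1 = jax.tree_util.tree_leaves(g1)
    leaves2 = jax.tree_util.tree_leaves(g2)
    return sum(jnp.vdot(a, b) for a, b in zip(leaves1, leaves2))

key = jax.random.PRNGKey(0)
params = init_params(key, [3, 128, 128, 1])   # parameters theta0 at initialization
x1, x2 = jnp.ones(3), jnp.arange(3.0)
print(ntk_entry(params, x1, x2))              # scalar kernel value K(x1, x2)
```

In the over-parameterized regime described above, gradient descent on the network behaves like kernel regression with this (fixed) kernel, which is why the paper studies the associated RKHS to characterize the inductive bias.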

Related articles:
arXiv:2005.11879 [stat.ML] (Published 2020-05-25)
Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks
arXiv:2106.05710 [stat.ML] (Published 2021-06-10)
DNN-Based Topology Optimisation: Spatial Invariance and Neural Tangent Kernel
arXiv:2107.12723 [stat.ML] (Published 2021-07-27)
Stability & Generalisation of Gradient Descent for Shallow Neural Networks without the Neural Tangent Kernel