arXiv:2104.00415 [cs.LG]

Learning with Neural Tangent Kernels in Near Input Sparsity Time

Amir Zandieh

Published 2021-04-01 (Version 1)

The Neural Tangent Kernel (NTK) characterizes the behavior of infinitely wide neural networks trained by gradient descent under the least-squares loss (Jacot et al., 2018). Despite its importance, however, the super-quadratic runtime of kernel methods limits the use of the NTK in large-scale learning tasks. To accelerate kernel machines with the NTK, we propose a near-input-sparsity-time algorithm that maps the input data to a randomized low-dimensional feature space such that the inner product of the transformed data approximates their NTK evaluation. Furthermore, we propose a feature map for approximating the convolutional counterpart of the NTK (Arora et al., 2019), which transforms any image in time linear in the number of pixels. We show that on standard large-scale regression and classification tasks, a linear regressor trained on our features outperforms trained neural networks and the Nyström method with the NTK kernel.
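As context for the feature-map claim, the following minimal NumPy sketch illustrates the underlying idea for a two-layer ReLU network: the inner product of randomized gradient features converges to the closed-form NTK as the width m grows. This is a generic Monte Carlo illustration, not the paper's near-input-sparsity-time sketching algorithm; the names ntk_exact and ntk_feature_gram are ours for illustration.

import numpy as np

def ntk_exact(X, Y):
    # Closed-form NTK of an infinitely wide two-layer ReLU net
    # (Jacot et al., 2018), expressed via the arc-cosine kernels.
    nx = np.linalg.norm(X, axis=1, keepdims=True)
    ny = np.linalg.norm(Y, axis=1, keepdims=True)
    u = np.clip((X @ Y.T) / (nx * ny.T), -1.0, 1.0)   # cosine of the angle
    theta = np.arccos(u)
    k0 = (np.pi - theta) / np.pi
    k1 = (u * (np.pi - theta) + np.sin(theta)) / np.pi
    return (X @ Y.T) * k0 + (nx * ny.T) * k1

def ntk_feature_gram(X, m=8192, seed=0):
    # Gram matrix of the gradient feature map phi(x) = grad_params f(x)
    # for a width-m two-layer ReLU net at random initialization, so that
    # <phi(x), phi(y)> -> ntk_exact(x, y) as m -> infinity. The implicit
    # feature dimension is m * (d + 1); this is a plain Monte Carlo
    # stand-in, NOT the paper's compact oblivious sketch.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], m))   # first-layer weights w_i
    Z = X @ W                                  # pre-activations, shape (n, m)
    act = np.maximum(Z, 0.0)                   # relu(w_i . x): gradient wrt a_i
    gate = (Z > 0).astype(X.dtype)             # relu'(w_i . x): enters gradient wrt w_i
    return (2.0 / m) * (act @ act.T + (X @ X.T) * (gate @ gate.T))

# Sanity check: the feature Gram matrix converges to the exact NTK.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 16))
print(np.abs(ntk_exact(X, X) - ntk_feature_gram(X, m=40000)).max())  # small

The practical payoff of an explicit feature map, as in the abstract, is that a linear or ridge regressor can then be fit in time linear in the number of examples, instead of forming and solving against the full n-by-n kernel matrix.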
