arXiv Analytics

arXiv:2501.08679 [cs.LG]

Diagonal Over-parameterization in Reproducing Kernel Hilbert Spaces as an Adaptive Feature Model: Generalization and Adaptivity

Yicheng Li, Qian Lin

Published 2025-01-15, Version 1

This paper introduces a diagonal adaptive kernel model that learns kernel eigenvalues and output coefficients simultaneously during training. Unlike fixed-kernel methods tied to neural tangent kernel theory, the diagonal adaptive kernel model adapts to the structure of the target function, significantly improving generalization over fixed-kernel methods, especially when the initial kernel is misaligned with the target. Moreover, we show that this adaptivity comes from learning the right eigenvalues during training, exhibiting a feature-learning behavior. By extending to deeper parameterizations, we further show how extra depth enhances adaptivity and generalization. This study combines insights from feature learning and implicit regularization, and provides a new perspective on the adaptivity and generalization potential of neural networks beyond the kernel regime.
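To make the setup concrete, below is a minimal sketch of a diagonal adaptive kernel fit in a fixed eigenbasis. Everything in it is an illustrative assumption rather than the paper's exact formulation: the cosine features, the product parameterization s * beta standing in for learned eigenvalues times output coefficients, and all hyperparameters.

```python
import numpy as np

# Hypothetical sketch of a diagonal adaptive kernel model: per-mode scales s
# (playing the role of learned kernel eigenvalues) and output coefficients
# beta are trained simultaneously by gradient descent. The basis, the
# s * beta parameterization, and all hyperparameters are assumptions.

rng = np.random.default_rng(0)

p, n = 50, 200          # number of basis functions / training samples
lr, steps = 0.05, 2000  # step size / gradient-descent iterations

def features(x):
    # Assumed fixed eigenbasis: cosine modes on [0, 1].
    return np.stack([np.cos(np.pi * k * x) for k in range(p)], axis=-1)

# Target concentrated on mid-range modes, i.e. misaligned with a kernel
# whose initial per-mode scales are uniform.
true_coef = np.zeros(p)
true_coef[10:15] = 1.0

x = rng.uniform(0.0, 1.0, n)
Phi = features(x)                       # (n, p) design matrix
y = Phi @ true_coef + 0.1 * rng.normal(size=n)

s = np.full(p, 0.1)                     # initial per-mode kernel scales
beta = np.zeros(p)                      # output coefficients

for _ in range(steps):
    resid = Phi @ (s * beta) - y        # residual of the effective model
    g = Phi.T @ resid / n               # gradient w.r.t. the product s * beta
    # Simultaneous update of both factors (chain rule through s * beta).
    s, beta = s - lr * g * beta, beta - lr * g * s

# Modes aligned with the target grow; misaligned modes stay near zero.
print(np.round(s * beta, 2)[8:17])
```

In this toy version, the learning rate of each effective coefficient s_i * beta_i scales with the current magnitude of its factors, so modes aligned with the target grow quickly while misaligned modes are implicitly suppressed, a simple stand-in for the eigenvalue-learning behavior the abstract describes.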

Comments: arXiv admin note: text overlap with arXiv:2409.00894
Categories: cs.LG, stat.ML