arXiv Analytics

arXiv:2304.06326 [stat.ML]

Understanding Overfitting in Adversarial Training in Kernel Regression

Teng Zhang, Kang Li

Published 2023-04-13, Version 1

Adversarial training and data augmentation with noise are widely adopted techniques to enhance the performance of neural networks. This paper investigates adversarial training and data augmentation with noise in the context of regularized regression in a reproducing kernel Hilbert space (RKHS). We establish the limiting formula for these techniques as the attack and noise size, as well as the regularization parameter, tend to zero. Based on this limiting formula, we analyze specific scenarios and demonstrate that, without appropriate regularization, these two methods may have larger generalization error and Lipschitz constant than standard kernel regression. However, by selecting the appropriate regularization parameter, these two methods can outperform standard kernel regression and achieve smaller generalization error and Lipschitz constant. These findings support the empirical observations that adversarial training can lead to overfitting, and appropriate regularization methods, such as early stopping, can alleviate this issue.
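The two methods compared in the abstract can be illustrated concretely. The sketch below is not the paper's construction; it is a minimal numpy illustration, under assumed choices (RBF kernel, toy 1-D data, hypothetical noise size `sigma` and regularization `lam`), of regularized kernel regression and of data augmentation with input noise, the second technique the abstract analyzes:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian RBF kernel: K[i, j] = exp(-gamma * ||x_i - z_j||^2)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam, gamma=1.0):
    # Regularized regression in the RKHS: solve (K + lam*n*I) alpha = y,
    # then predict with f(z) = sum_i alpha_i k(z, x_i).
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return lambda Z: rbf_kernel(Z, X, gamma) @ alpha

# Toy data: noisy samples of a smooth target on [-1, 1]
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(40)

# Data augmentation with noise (hypothetical sizes): replicate each
# training input with small Gaussian perturbations of size sigma.
sigma, copies = 0.05, 5
X_aug = np.vstack([X + sigma * rng.standard_normal(X.shape) for _ in range(copies)])
y_aug = np.tile(y, copies)

Xt = np.linspace(-1, 1, 200)[:, None]
target = np.sin(3 * Xt[:, 0])

f_std = kernel_ridge_fit(X, y, lam=1e-3)
f_aug = kernel_ridge_fit(X_aug, y_aug, lam=1e-3)

err_std = np.mean((f_std(Xt) - target) ** 2)
err_aug = np.mean((f_aug(Xt) - target) ** 2)
print(f"generalization error  standard: {err_std:.4f}  augmented: {err_aug:.4f}")
```

Whether the augmented fit beats the standard one depends on how `lam` is tuned jointly with the noise size, which is exactly the regime (noise size and regularization tending to zero together) the paper's limiting formula characterizes.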

Related articles:
arXiv:2410.16073 [stat.ML] (Published 2024-10-21)
On the Geometry of Regularization in Adversarial Training: High-Dimensional Asymptotics and Generalization Bounds
arXiv:2205.09906 [stat.ML] (Published 2022-05-20)
Data Augmentation for Compositional Data: Advancing Predictive Models of the Microbiome
arXiv:2309.07453 [stat.ML] (Published 2023-09-14)
SC-MAD: Mixtures of Higher-order Networks for Data Augmentation