arXiv:2205.15130 [cs.LG]

Why Adversarial Training of ReLU Networks Is Difficult?

Xu Cheng, Hao Zhang, Yue Xin, Wen Shen, Jie Ren, Quanshi Zhang

Published 2022-05-30 (Version 1)

This paper mathematically derives an analytic solution for the adversarial perturbation on a ReLU network and uses it to theoretically explain why adversarial training is difficult. Specifically, we formulate the dynamics of the adversarial perturbation generated by a multi-step attack, which shows that the perturbation tends to strengthen its components along eigenvectors corresponding to a few top-ranked eigenvalues of the Hessian matrix of the loss w.r.t. the input. We also prove that adversarial training exponentially strengthens the influence of unconfident input samples with large gradient norms. In addition, we find that adversarial training amplifies the influence of the Hessian matrix of the loss w.r.t. the network parameters, which makes the training more likely to oscillate along the directions of a few samples and further increases the difficulty of adversarial training. Crucially, our proofs provide a unified explanation for previous findings on understanding adversarial training.
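The setting the abstract describes can be pictured with a short sketch. The following is a minimal illustration, not the authors' code: the toy PyTorch ReLU network, the unconstrained gradient-ascent attack, the step size, the number of steps, and the top-k eigenvector check are all assumptions made here. It runs a multi-step attack on the loss w.r.t. the input and then measures how much of the resulting perturbation lies along the top eigen-directions of the input Hessian of the loss.

```python
# Minimal sketch (illustrative assumptions, not the paper's method):
# multi-step gradient attack on a small ReLU net, then alignment of the
# perturbation with top eigenvectors of the input Hessian of the loss.
import torch
import torch.nn as nn
from torch.autograd.functional import hessian

torch.manual_seed(0)

d_in, d_hidden, n_classes = 20, 64, 5
net = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                    nn.Linear(d_hidden, n_classes))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(d_in)          # a single input sample
y = torch.tensor(2)            # its label

# Multi-step attack: repeatedly ascend the loss w.r.t. the input.
delta = torch.zeros_like(x, requires_grad=True)
step_size, n_steps = 0.05, 20
for _ in range(n_steps):
    loss = loss_fn(net(x + delta).unsqueeze(0), y.unsqueeze(0))
    grad, = torch.autograd.grad(loss, delta)
    with torch.no_grad():
        delta += step_size * grad   # unconstrained ascent, for simplicity

# Hessian of the loss w.r.t. the input at the clean point.
def input_loss(inp):
    return loss_fn(net(inp).unsqueeze(0), y.unsqueeze(0))

H = hessian(input_loss, x)
eigvals, eigvecs = torch.linalg.eigh(H)   # eigenvalues in ascending order

# Fraction of the perturbation's energy captured by the top-k eigenvectors.
k = 3
top_vecs = eigvecs[:, -k:]
proj = top_vecs.T @ delta.detach()
print("energy in top-%d eigen-directions: %.3f"
      % (k, (proj.norm() / delta.detach().norm()).item() ** 2))
```

Under this kind of setup, the fraction printed at the end tends to be large when the perturbation concentrates along the top Hessian eigen-directions, which is the qualitative behavior the abstract's first result describes.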

Related articles:
arXiv:2106.01606 [cs.LG] (Published 2021-06-03)
Exploring Memorization in Adversarial Training
arXiv:2008.03364 [cs.LG] (Published 2020-08-07)
Improving the Speed and Quality of GAN by Adversarial Training
arXiv:2205.01663 [cs.LG] (Published 2022-05-03)
Adversarial Training for High-Stakes Reliability