arXiv:1905.06455 [cs.LG]
On Norm-Agnostic Robustness of Adversarial Training
Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin
Published 2019-05-15 (Version 1)
Adversarial examples are carefully perturbed inputs designed to fool machine learning models. A widely acknowledged defense against such examples is adversarial training, in which adversarial examples are injected into the training data to increase robustness. In this paper, we propose a new attack that unveils an undesired property of state-of-the-art adversarial training: it fails to achieve robustness against perturbations in the $\ell_2$ and $\ell_\infty$ norms simultaneously. We also discuss a possible solution to this issue and its limitations.
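For context, adversarial training typically replaces (or augments) clean training batches with adversarial examples crafted on the fly by projected gradient descent (PGD) inside an $\ell_\infty$ or $\ell_2$ ball. Below is a minimal PyTorch sketch of that scheme, not the authors' code: the function names, hyperparameters, and the assumption of image inputs in [0, 1] are all illustrative.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, step_size, num_steps, norm="linf"):
    # Craft adversarial examples within an eps-ball around x
    # (random restarts omitted for brevity).
    x_adv = x.clone().detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            if norm == "linf":
                # Ascend along the gradient sign, then clip back
                # into the l_inf ball of radius eps.
                x_adv = x_adv + step_size * grad.sign()
                x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
            else:  # "l2"
                # Ascend along the normalized gradient, then project
                # the perturbation back onto the l_2 ball of radius eps.
                g = grad.flatten(1)
                g = g / (g.norm(dim=1, keepdim=True) + 1e-12)
                x_adv = x_adv + step_size * g.view_as(x)
                delta = (x_adv - x).flatten(1)
                scale = (eps / (delta.norm(dim=1, keepdim=True) + 1e-12)).clamp(max=1.0)
                x_adv = x + (delta * scale).view_as(x)
            # Assumes image inputs in [0, 1].
            x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv

def adversarial_training_step(model, optimizer, x, y, eps=8/255, norm="linf"):
    # One training step on adversarial examples in place of clean data.
    model.eval()  # freeze batch-norm statistics while crafting the attack
    x_adv = pgd_attack(model, x, y, eps, step_size=eps / 4, num_steps=10, norm=norm)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

Training with norm="linf" versus norm="l2" yields different robust models; the paper's observation is that neither choice confers robustness under the other norm's perturbations.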
Comments: 4 pages, 2 figures, presented at the ICML 2019 Workshop on Uncertainty and Robustness in Deep Learning. arXiv admin note: text overlap with arXiv:1809.03113
Related articles:
arXiv:1911.06479 [cs.LG] (Published 2019-11-15)
On Model Robustness Against Adversarial Examples
arXiv:1804.07757 [cs.LG] (Published 2018-04-20)
Learning More Robust Features with Adversarial Training
arXiv:1903.08778 [cs.LG] (Published 2019-03-20)
Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes