arXiv:1502.02590 [cs.LG]
Analysis of classifiers' robustness to adversarial perturbations
Alhussein Fawzi, Omar Fawzi, Pascal Frossard
Published 2015-02-09 (Version 1)
The robustness of a classifier to arbitrary small perturbations of the data points is a highly desirable property when the classifier is deployed in real and possibly hostile environments. In this paper, we propose a theoretical framework for analyzing the robustness of classifiers to adversarial perturbations, and study two common families of classifiers. In both cases, we show the existence of a fundamental limit on the robustness to adversarial perturbations, which is expressed in terms of a distinguishability measure between the classes. Our result implies that in tasks involving small distinguishability, no classifier will be robust to adversarial perturbations, even if good accuracy is achieved. Furthermore, we show that robustness to random noise does not imply, in general, robustness to adversarial perturbations. In fact, in high-dimensional problems, linear classifiers are shown to be much more robust to random noise than to adversarial perturbations. Our analysis is complemented by experimental results on controlled and real-world data. To the best of our knowledge, this is the first theoretical work that addresses the surprising phenomenon of adversarial instability recently observed for deep networks (Szegedy et al., 2014). Our work shows that this phenomenon is not limited to deep networks, and gives a theoretical explanation of the causes underlying the adversarial instability of classifiers.
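The gap between random-noise and adversarial robustness claimed for linear classifiers can be illustrated with a minimal numerical sketch (not taken from the paper; the classifier weights `w`, bias `b`, data point `x`, and dimension `d` below are illustrative assumptions). For a linear classifier f(x) = w·x + b, the smallest perturbation that flips the predicted label has norm |f(x)|/‖w‖ and points along w, whereas an isotropic random perturbation is nearly orthogonal to w in high dimensions and must be roughly √d times larger before it typically changes the label.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000                       # high-dimensional input (illustrative)
w = rng.normal(size=d)           # hypothetical linear classifier f(x) = w.x + b
b = 0.0
x = rng.normal(size=d)
score = w @ x + b

# Minimal adversarial perturbation: move straight along w until the sign flips.
r_adv = abs(score) / np.linalg.norm(w)

def flip_rate(radius, trials=200):
    """Fraction of isotropic perturbations of the given norm that flip the label."""
    noise = rng.normal(size=(trials, d))
    noise *= radius / np.linalg.norm(noise, axis=1, keepdims=True)
    return np.mean(np.sign(w @ (x + noise).T + b) != np.sign(score))

# Grow the random-noise radius until roughly half of the perturbations flip the label.
radius = r_adv
while flip_rate(radius) < 0.5:
    radius *= 2.0

print(f"adversarial radius        ~ {r_adv:.3f}")
print(f"random-noise radius (50%) ~ {radius:.1f}")
print(f"ratio                     ~ {radius / r_adv:.0f}  (grows roughly like sqrt(d))")
```

Running this sketch shows the random-noise radius exceeding the adversarial radius by a factor on the order of √d, which is the qualitative behavior the abstract describes for linear classifiers in high dimensions.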