arXiv:1608.08967 [cs.LG]

Robustness of classifiers: from adversarial to random noise

Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard

Published 2016-08-31 (Version 1)

Several recent works have shown that state-of-the-art classifiers are vulnerable to worst-case (i.e., adversarial) perturbations of the data points. On the other hand, it has been empirically observed that these same classifiers are relatively robust to random noise. In this paper, we propose to study a semi-random noise regime that generalizes both the random and worst-case noise regimes. We provide the first quantitative analysis of the robustness of nonlinear classifiers in this general noise regime, establishing precise theoretical bounds on robustness that depend on the curvature of the classifier's decision boundary. Our bounds confirm and quantify the empirical observation that classifiers satisfying curvature constraints are robust to random noise. Moreover, we quantify the robustness of classifiers in terms of the subspace dimension in the semi-random noise regime, and show that our bounds remarkably interpolate between the worst-case and random noise regimes. Our experiments show that the derived bounds provide very accurate estimates when applied to various state-of-the-art deep neural networks and datasets. This result suggests bounds on the curvature of the classifiers' decision boundaries that we support experimentally, and more generally offers important insights into the geometry of high-dimensional classification problems.
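
To make the semi-random regime concrete, the sketch below (a toy NumPy illustration, not the authors' code or experimental setup) computes the smallest perturbation confined to a random m-dimensional subspace that crosses the decision boundary of an affine classifier f(x) = w·x + b, the simplest case where this computation is exact. Sweeping m from 1 to the input dimension d interpolates between the random-noise and worst-case (adversarial) regimes; the helper name min_perturbation_in_subspace and all parameter values are hypothetical.

```python
# Toy illustration (assumption: affine classifier f(x) = w @ x + b;
# this is not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
d = 1000                          # input dimension
w = rng.standard_normal(d)        # toy classifier weights
b = 0.0
x = rng.standard_normal(d)        # a data point to perturb

def min_perturbation_in_subspace(w, b, x, S):
    """Smallest r in span(S) such that w @ (x + r) + b = 0,
    where S is a (d, m) matrix with orthonormal columns."""
    ws = S.T @ w                               # coordinates of w in the subspace
    return -(w @ x + b) / (ws @ ws) * (S @ ws)

for m in [1, 10, 100, d]:
    # Draw a random m-dimensional subspace: QR of a Gaussian matrix.
    S, _ = np.linalg.qr(rng.standard_normal((d, m)))
    r = min_perturbation_in_subspace(w, b, x, S)
    # The norm shrinks roughly like sqrt(d/m): m = 1 is the random-noise
    # regime, m = d recovers the worst-case (adversarial) perturbation.
    print(f"m = {m:4d}   ||r|| = {np.linalg.norm(r):.3f}")
```

In this affine toy the required perturbation norm concentrates around sqrt(d/m) times the worst-case norm, since a random m-dimensional projection preserves about m/d of the squared norm of w. For nonlinear deep networks the boundary is not affine, and the paper's bounds quantify how the boundary's curvature controls the deviation from this idealized behavior.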

Related articles:
arXiv:1905.11213 [cs.LG] (Published 2019-05-27)
Provable robustness against all adversarial $l_p$-perturbations for $p\geq 1$
arXiv:0803.3490 [cs.LG] (Published 2008-03-25, updated 2008-11-11)
Robustness and Regularization of Support Vector Machines
arXiv:1005.2243 [cs.LG] (Published 2010-05-13)
Robustness and Generalization