arXiv:1905.11213 [cs.LG]
Provable robustness against all adversarial $l_p$-perturbations for $p\geq 1$
Francesco Croce, Matthias Hein
Published 2019-05-27 (Version 1)
In recent years several adversarial attacks and defenses have been proposed. Often, seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma is provable robustness guarantees. While provably robust models for specific $l_p$-perturbation models have been developed, they remain vulnerable to other $l_q$-perturbations. We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness wrt $l_1$- and $l_\infty$-perturbations, and we show how this leads to provably robust models wrt any $l_p$-norm for $p\geq 1$.
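To illustrate why joint $l_1$- and $l_\infty$-robustness already implies some $l_p$-robustness for every $p\geq 1$, here is a minimal LaTeX sketch using only standard norm inequalities. The radius $\epsilon_p$ below is a naive bound assumed for this sketch; the paper's convex-hull analysis of the union of the two balls gives a strictly larger certified radius.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Simplified illustration (weaker than the paper's convex-hull bound):
% why joint l_1 / l_infty robustness already implies some l_p robustness.
For $\delta \in \mathbb{R}^d$ and $p \ge 1$, H\"older's inequality gives
\[
  \|\delta\|_\infty \le \|\delta\|_p,
  \qquad
  \|\delta\|_1 \le d^{\,1-1/p}\,\|\delta\|_p .
\]
Hence, with the (assumed) radius
\[
  \epsilon_p \;=\; \max\bigl\{\epsilon_\infty,\; \epsilon_1\, d^{\,1/p-1}\bigr\},
\]
the ball $B_p(\epsilon_p)$ is contained in $B_1(\epsilon_1) \cup B_\infty(\epsilon_\infty)$:
if the maximum is attained by $\epsilon_\infty$, then $\|\delta\|_p \le \epsilon_p$
implies $\|\delta\|_\infty \le \epsilon_\infty$; otherwise it implies
$\|\delta\|_1 \le d^{\,1-1/p}\epsilon_1 d^{\,1/p-1} = \epsilon_1$. A model that is
provably robust on both the $l_1$- and the $l_\infty$-ball is therefore robust on
$B_p(\epsilon_p)$ for every $p \ge 1$.
\end{document}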
Related articles:
arXiv:2105.14710 [cs.LG] (Published 2021-05-31)
Robustifying $\ell_\infty$ Adversarial Training to the Union of Perturbation Models
arXiv:2406.13073 [cs.LG] (Published 2024-06-18)
NoiSec: Harnessing Noise for Security against Adversarial and Backdoor Attacks
arXiv:2004.09179 [cs.LG] (Published 2020-04-20)
GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples