arXiv Analytics

arXiv:2003.09461 [cs.LG]

Adversarial Robustness on In- and Out-Distribution Improves Explainability

Maximilian Augustin, Alexander Meinke, Matthias Hein

Published 2020-03-20 (Version 1)

Neural networks have led to major improvements in image classification, but they suffer from non-robustness to adversarial changes, unreliable uncertainty estimates on out-distribution samples, and inscrutable black-box decisions. In this work we propose RATIO, a training procedure for Robustness via Adversarial Training on In- and Out-distribution, which leads to robust models with reliable and robust confidence estimates on the out-distribution. RATIO has generative properties similar to adversarial training, so that visual counterfactuals produce class-specific features. While adversarial training comes at the price of lower clean accuracy, RATIO achieves state-of-the-art $l_2$-adversarial robustness on CIFAR10 while maintaining better clean accuracy.
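The abstract describes the objective only at a high level: adversarial training on labeled in-distribution data, combined with a penalty that enforces low confidence on worst-case perturbations of out-distribution samples. The following is a minimal PyTorch sketch of such a combined training step, assuming an $l_2$ PGD inner attack on 4D image batches; the function names (`pgd_l2`, `ratio_step`), the weighting `lam`, and all hyperparameters are illustrative placeholders, not the authors' implementation or settings.

```python
import torch
import torch.nn.functional as F

def pgd_l2(model, x, loss_fn, eps=0.5, steps=7, step_size=0.15):
    """PGD in the l2 ball of radius eps around x (assumed shape B,C,H,W),
    ascending on loss_fn(model(x_adv))."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta))
        grad, = torch.autograd.grad(loss, delta)
        # normalized gradient-ascent step
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta + step_size * grad / g_norm
        # project back onto the l2 ball of radius eps
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = (delta * (eps / d_norm).clamp(max=1.0)).detach().requires_grad_(True)
    return (x + delta).detach()

def ratio_step(model, x_in, y_in, x_out, optimizer, lam=1.0):
    """One combined in-/out-distribution adversarial training step (sketch)."""
    model.eval()  # stable BatchNorm statistics while crafting the attacks
    # (1) adversarial example on the labeled in-distribution batch
    x_in_adv = pgd_l2(model, x_in, lambda out: F.cross_entropy(out, y_in))
    # (2) perturb the out-distribution batch to maximize the model's confidence
    x_out_adv = pgd_l2(model, x_out,
                       lambda out: out.log_softmax(1).max(1).values.mean())
    model.train()
    # standard adversarial-training loss on perturbed in-distribution points
    loss_in = F.cross_entropy(model(x_in_adv), y_in)
    # push predictions on perturbed out-distribution points towards uniform
    # (cross-entropy to the uniform distribution, up to a constant factor)
    loss_out = -model(x_out_adv).log_softmax(1).mean()
    loss = loss_in + lam * loss_out
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under these assumptions, each step would draw a labeled in-distribution batch `(x_in, y_in)` (e.g. CIFAR10) and an unlabeled out-distribution batch `x_out`, then call `ratio_step(model, x_in, y_in, x_out, optimizer)`; the uniform-confidence penalty on the worst-case out-distribution points is what yields the robust confidence estimates the abstract claims.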

Related articles:
arXiv:2006.16427 [cs.LG] (Published 2020-06-29)
Biologically Inspired Mechanisms for Adversarial Robustness
arXiv:2102.08868 [cs.LG] (Published 2021-02-17)
Bridging the Gap Between Adversarial Robustness and Optimization Bias
arXiv:1910.10679 [cs.LG] (Published 2019-10-23)
A Useful Taxonomy for Adversarial Robustness of Neural Networks