arXiv Analytics

arXiv:2003.09372 [cs.LG]

One Neuron to Fool Them All

Anshuman Suri, David Evans

Published 2020-03-20Version 1

Despite extensive research on adversarial examples, the root causes of model susceptibility are not well understood. Instead of looking at attack-specific robustness, we propose a notion that evaluates the sensitivity of individual neurons in terms of how robust the model's output is to direct perturbations of that neuron's output. Analyzing models from this perspective reveals distinctive characteristics of standard as well as adversarially-trained robust models, and leads to several curious results. In our experiments on CIFAR-10 and ImageNet, we find that attacks using a loss function that targets just a single sensitive neuron find adversarial examples nearly as effectively as ones that target the full model. We analyze the properties of these sensitive neurons to propose a regularization term that can help a model achieve robustness to a variety of different perturbation constraints while maintaining accuracy on natural data distributions. Code for all our experiments is available at https://github.com/iamgroot42/sauron.
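To make the core idea concrete, the following is a minimal NumPy sketch (not the paper's actual code, which is in the linked repository) of an attack whose loss targets a single hidden neuron rather than the full model's loss: pick one neuron, take the gradient of its activation with respect to the input, and apply an FGSM-style signed step. All names, the toy two-layer ReLU network, and the neuron-selection heuristic here are illustrative assumptions.

```python
import numpy as np

# Illustrative toy network: one ReLU hidden layer, one linear output layer.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # hidden-layer weights (8 neurons, 4 inputs)
W2 = rng.normal(size=(2, 8))   # output-layer weights (2 classes)

def hidden(x):
    """Hidden-layer ReLU activations."""
    return np.maximum(W1 @ x, 0.0)

def logits(x):
    """Model outputs (class scores)."""
    return W2 @ hidden(x)

x = rng.normal(size=4)

# Pick a "sensitive" neuron; here we simply take the most active one
# (a stand-in for the paper's sensitivity analysis).
j = int(np.argmax(hidden(x)))

# Single-neuron loss: the chosen neuron's pre-activation. For a linear
# first layer its input gradient is just the corresponding weight row,
# so the signed-gradient (FGSM-style) step is:
grad_j = W1[j]
eps = 0.5
x_adv = x + eps * np.sign(grad_j)

# The perturbation increases neuron j's activation, which in turn shifts
# the logits -- without ever touching the full model's loss.
assert hidden(x_adv)[j] > hidden(x)[j]
```

In the paper's setting the same signed step would be computed by backpropagation through a deep network, and the attack's success is measured by whether the resulting input is misclassified; this sketch only shows the loss substitution itself.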

Related articles: Most relevant | Search more
arXiv:1807.00051 [cs.LG] (Published 2018-06-29)
Adversarial Examples in Deep Learning: Characterization and Divergence
arXiv:1902.01235 [cs.LG] (Published 2019-02-01)
Robustness Certificates Against Adversarial Examples for ReLU Networks
arXiv:1903.02380 [cs.LG] (Published 2019-03-06)
Detecting Overfitting via Adversarial Examples