arXiv:2006.16427 [cs.LG]

Biologically Inspired Mechanisms for Adversarial Robustness

Manish V. Reddy, Andrzej Banburski, Nishka Pant, Tomaso Poggio

Published 2020-06-29 (Version 1)

A convolutional neural network that is strongly robust to adversarial perturbations at reasonable computational and performance cost has yet to be demonstrated. The primate visual ventral stream appears robust to small perturbations in visual stimuli, but the mechanisms underlying this robust perception are not well understood. In this work, we investigate the role of two biologically plausible mechanisms in adversarial robustness. We demonstrate that the non-uniform sampling performed by the primate retina, and the presence of multiple receptive fields with a range of sizes at each eccentricity, improve the robustness of neural networks to small adversarial perturbations. We verify that these two mechanisms do not suffer from gradient obfuscation, and we study their contribution to adversarial robustness through ablation studies.
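
To make the two mechanisms concrete, below is a minimal Python sketch, not the authors' implementation: foveated_sample resamples an image on a log-polar grid that is dense near the center of gaze and sparse in the periphery (retina-like non-uniform sampling), and multiscale_stack is a crude stand-in for multiple receptive field sizes at each location, stacking box-filter responses at several widths. All function names and parameters here are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def foveated_sample(img, num_rings=32, samples_per_ring=64, scale=0.05):
        # Sample the image on a log-polar grid: ring radii grow
        # exponentially with eccentricity index, so sample density
        # falls off away from the fovea (image center).
        h, w = img.shape[:2]
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        max_r = min(cy, cx)
        radii = np.exp(scale * np.arange(num_rings)) - 1.0
        radii = radii / radii[-1] * max_r
        thetas = np.linspace(0, 2 * np.pi, samples_per_ring, endpoint=False)
        out = np.zeros((num_rings, samples_per_ring) + img.shape[2:], dtype=img.dtype)
        for i, r in enumerate(radii):
            ys = np.clip(np.round(cy + r * np.sin(thetas)).astype(int), 0, h - 1)
            xs = np.clip(np.round(cx + r * np.cos(thetas)).astype(int), 0, w - 1)
            out[i] = img[ys, xs]  # nearest-neighbor lookup along the ring
        return out

    def multiscale_stack(gray, sizes=(3, 7, 15)):
        # Approximate multiple receptive field sizes at one location:
        # box-filter a grayscale image at several window widths and
        # stack the responses channel-wise.
        g = gray.astype(float)
        return np.stack([uniform_filter(g, size=s) for s in sizes], axis=-1)

Feeding a standard CNN such foveated resamplings, and giving it access to responses pooled at several scales in parallel, is one plausible way to exercise the two mechanisms the abstract describes; the paper's actual scheme ties the range of receptive field sizes to eccentricity.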
