arXiv:1707.04131 [cs.LG]

Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models

Jonas Rauber, Wieland Brendel, Matthias Bethge

Published 2017-07-13 (Version 1)

Even today's most advanced machine learning models are easily fooled by almost imperceptible perturbations of their inputs. Foolbox is a new Python package to generate such adversarial perturbations and to quantify and compare the robustness of machine learning models. It is built around the idea that the most comparable robustness measure is the minimum perturbation needed to craft an adversarial example. To this end, Foolbox provides reference implementations of most published adversarial attack methods alongside some new ones, all of which perform internal hyperparameter tuning to find the minimum adversarial perturbation. Additionally, Foolbox interfaces with the most popular deep learning frameworks, such as PyTorch, Keras, TensorFlow, Theano and MXNet, provides a straightforward way to add support for other frameworks, and allows different adversarial criteria, such as targeted misclassification and top-k misclassification, as well as different distance measures. The code is licensed under the MIT license and is openly available at https://github.com/bethgelab/foolbox.
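
To illustrate the workflow the abstract describes, here is a minimal sketch closely following the project's v0.x-era README: the foolbox.models.KerasModel wrapper, the FGSM attack, and the foolbox.utils.imagenet_example helper are as documented there, but exact signatures may differ across versions, so treat this as an assumption rather than a definitive recipe.

    import numpy as np
    import keras
    import foolbox
    from keras.applications.resnet50 import ResNet50

    # Wrap a trained Keras model; analogous wrappers exist for the other
    # supported frameworks (e.g. foolbox.models.PyTorchModel).
    keras.backend.set_learning_phase(0)
    kmodel = ResNet50(weights='imagenet')
    preprocessing = (np.array([104, 116, 123]), 1)  # mean/std applied by the wrapper
    fmodel = foolbox.models.KerasModel(kmodel, bounds=(0, 255),
                                       preprocessing=preprocessing)

    # Fetch an example image with its ground-truth label
    # (ResNet50 expects BGR channel order, hence the flip).
    image, label = foolbox.utils.imagenet_example()
    bgr = image[:, :, ::-1]

    # Run an attack; the default criterion is untargeted misclassification,
    # and the attack tunes its hyperparameters internally to find a small
    # adversarial perturbation.
    attack = foolbox.attacks.FGSM(fmodel)
    adversarial = attack(bgr, label)

    # The attack returns the adversarial example, or None if it failed.
    if adversarial is not None:
        print(np.linalg.norm(adversarial - bgr))

Other criteria can be supplied when constructing an attack, e.g. foolbox.criteria.TargetClass for targeted misclassification or foolbox.criteria.TopKMisclassification, for attacks that support them.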

Comments: Code and examples available at https://github.com/bethgelab/foolbox and documentation available at http://foolbox.readthedocs.io/
Categories: cs.LG, cs.CR, cs.CV, stat.ML
Related articles:
arXiv:2010.15391 [cs.LG] (Published 2020-10-29)
Robustifying Binary Classification to Adversarial Perturbation
arXiv:2010.00821 [cs.LG] (Published 2020-10-02)
Explainable Online Validation of Machine Learning Models for Practical Applications
arXiv:1705.03387 [cs.LG] (Published 2017-05-09)
Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN