arXiv:2007.01507 [cs.LG]
Towards Robust Deep Learning with Ensemble Networks and Noisy Layers
Published 2020-07-03Version 1
In this paper we provide an approach for deep learning that protects against adversarial examples in image-classification networks. The approach relies on two mechanisms: 1) a mechanism that increases robustness at the expense of accuracy, and 2) a mechanism that improves accuracy but does not always increase robustness. We show that combining the two mechanisms can provide protection against adversarial examples while retaining accuracy. We formulate potential attacks on our approach and provide experimental results demonstrating its effectiveness.
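The abstract does not give implementation details, but the two mechanisms named in the title suggest a familiar combination: layers that inject random noise (robustness at some cost to accuracy) and an ensemble of networks whose averaged predictions recover accuracy. The following is a minimal, hypothetical sketch of that idea in plain numpy; the function names, noise model (additive Gaussian), and toy dimensions are assumptions, not the paper's actual architecture.

```python
import numpy as np

def noisy_layer(x, weights, sigma=0.1, rng=None):
    """Linear layer whose input is perturbed with additive Gaussian
    noise. The randomness (a robustness mechanism assumed here for
    illustration) makes gradients stochastic, which can blunt
    gradient-based adversarial attacks at some cost to accuracy."""
    rng = np.random.default_rng() if rng is None else rng
    noisy_x = x + rng.normal(0.0, sigma, size=x.shape)
    return noisy_x @ weights

def ensemble_predict(x, members, sigma=0.1, seed=0):
    """Average the class scores of several noisy networks.
    Averaging over ensemble members (the accuracy mechanism)
    smooths out the variance the noise introduces."""
    rng = np.random.default_rng(seed)
    scores = [noisy_layer(x, w, sigma, rng) for w in members]
    return np.mean(scores, axis=0)

# Toy demo: an ensemble of 3 one-layer "networks" mapping
# 4 input features to 2 class scores.
rng = np.random.default_rng(7)
members = [rng.normal(size=(4, 2)) for _ in range(3)]
x = np.ones(4)
print(ensemble_predict(x, members).shape)
```

In this sketch each forward pass is stochastic, so an attacker optimizing against one noise draw need not transfer to the next, while averaging over members keeps the clean prediction stable.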
Related articles:
arXiv:1903.08778 [cs.LG] (Published 2019-03-20)
Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
arXiv:1906.07982 [cs.LG] (Published 2019-06-19)
A unified view on differential privacy and robustness to adversarial examples
arXiv:1901.10861 [cs.LG] (Published 2019-01-30)
A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance