arXiv Analytics

arXiv:1804.08529 [cs.CV]

VectorDefense: Vectorization as a Defense to Adversarial Examples

Vishaal Munusamy Kabilan, Brandon Morris, Anh Nguyen

Published 2018-04-23 (Version 1)

Training deep neural networks on images represented as grids of pixels has brought to light an intriguing phenomenon known as adversarial examples. Inspired by how humans reconstruct abstract concepts, we attempt to codify the input bitmap image into a set of compact, interpretable elements so that the classifier is not fooled by adversarial structures. We take a first step in this direction by experimenting with image vectorization as an input-transformation step that maps adversarial examples back onto the natural manifold of MNIST handwritten digits. We compare our method with state-of-the-art input transformations and further discuss the trade-offs between hand-designed and learned transformation defenses.
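The abstract does not spell out the vectorization pipeline. Assuming a trace-and-rasterize loop built on the Potrace CLI (a standard tool for bitmap tracing), a minimal sketch of such an input transformation might look like the following; the function name, file paths, and the cairosvg rasterization step are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the vectorize-then-rasterize idea, assuming the
# Potrace CLI and the numpy/Pillow/cairosvg packages are available.
# Function name and file paths are illustrative, not from the paper.
import subprocess
import numpy as np
from PIL import Image
import cairosvg

def vectorize_transform(bitmap: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Push a 28x28 grayscale digit through a vector representation.

    Binarize -> trace to SVG contours -> rasterize back to a bitmap.
    The tracer fits smooth contours, so small pixel-level adversarial
    perturbations tend not to survive the round trip.
    """
    # Binarize and save as a 1-bit PBM, the input format Potrace consumes.
    mask = (bitmap >= threshold).astype(np.uint8) * 255
    Image.fromarray(mask).convert("1").save("digit.pbm")

    # Trace into SVG Bezier contours; -i inverts so the light-on-dark
    # MNIST strokes are treated as foreground, -s selects SVG output.
    subprocess.run(["potrace", "digit.pbm", "-i", "-s", "-o", "digit.svg"],
                   check=True)

    # Rasterize the vector image back at the classifier's input resolution.
    cairosvg.svg2png(url="digit.svg", write_to="digit.png",
                     output_width=28, output_height=28,
                     background_color="white")

    # Re-invert so the digit is light-on-dark again, as the model expects.
    return 255 - np.array(Image.open("digit.png").convert("L"))
```

At inference time the model would classify vectorize_transform(x) instead of x; being hand-designed rather than learned, the transformation itself has no trainable parameters, which is the trade-off the abstract alludes to.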

Related articles:
arXiv:2001.03460 [cs.CV] (Published 2020-01-08)
Cloud-based Image Classification Service Is Not Robust To Adversarial Examples: A Forgotten Battlefield
arXiv:1911.11946 [cs.CV] (Published 2019-11-27)
Can Attention Masks Improve Adversarial Robustness?
arXiv:2001.00116 [cs.CV] (Published 2020-01-01)
Erase and Restore: Simple, Accurate and Resilient Detection of $L_2$ Adversarial Examples