arXiv Analytics


arXiv:1806.04646 [cs.CV]

Adversarial Attacks on Variational Autoencoders

George Gondim-Ribeiro, Pedro Tabacof, Eduardo Valle

Published 2018-06-12 (Version 1)

Adversarial attacks are malicious inputs crafted to derail machine-learning models. We propose a scheme to attack autoencoders, together with a quantitative evaluation framework that correlates well with qualitative assessment of the attacks. We assess, with statistically validated experiments, the resistance to attacks of three variational autoencoders (simple, convolutional, and DRAW) on three datasets (MNIST, SVHN, and CelebA), showing that both DRAW's recurrence and its attention mechanism lead to better resistance. As autoencoders are proposed for compressing data, a scenario in which their safety is paramount, we expect adversarial attacks on them to receive more attention.
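The abstract describes the attack only at a high level. Below is a minimal sketch of one plausible instantiation, a latent-space attack in the spirit of the proposed scheme: a small perturbation is optimized so that the perturbed input's latent code approaches that of a chosen target image. It assumes a PyTorch VAE whose `encode` method returns the mean of q(z|x); the function name `latent_attack`, the trade-off weight `lam`, and the optimizer settings are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a latent-space adversarial attack on a VAE (assumed API, not the
# paper's exact objective): optimize a perturbation so the adversarial input's
# latent code mimics a target's, while an L2 penalty keeps the change small.
import torch

def latent_attack(vae, x_source, x_target, lam=1e-3, steps=500, lr=1e-2):
    """Return an adversarial version of x_source whose latent code mimics x_target's."""
    with torch.no_grad():
        z_target = vae.encode(x_target)  # assumption: encode() returns the mean of q(z|x)
    delta = torch.zeros_like(x_source, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x_source + delta).clamp(0.0, 1.0)  # keep pixels in a valid range
        z_adv = vae.encode(x_adv)
        # Trade off attack strength against perturbation size; lam weights the L2 norm.
        loss = (z_adv - z_target).pow(2).sum() + lam * delta.pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x_source + delta).detach().clamp(0.0, 1.0)
```

A successful attack in this setting is one where the VAE reconstructs the adversarial input into something resembling the target image even though the input still looks like the source, which is what a quantitative evaluation framework for such attacks must capture.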

Related articles:
arXiv:1712.02950 [cs.CV] (Published 2017-12-08)
CycleGAN: a Master of Steganography
arXiv:2002.11881 [cs.CV] (Published 2020-02-27)
Defense-PointNet: Protecting PointNet Against Adversarial Attacks
arXiv:2007.08716 [cs.CV] (Published 2020-07-17)
Understanding and Diagnosing Vulnerability under Adversarial Attacks