arXiv Analytics

arXiv:1711.08875 [cs.CV]

Wasserstein Introspective Neural Networks

Kwonjoon Lee, Weijian Xu, Fan Fan, Zhuowen Tu

Published 2017-11-24 (Version 1)

We present Wasserstein introspective neural networks (WINN), which act as both a generator and a discriminator within a single model. WINN provides a significant improvement over the recent introspective neural networks (INN) method by enhancing INN's generative modeling capability. WINN has three interesting properties: (1) a mathematical connection is made between the formulation of Wasserstein generative adversarial networks (WGAN) and the INN algorithm; (2) the explicit adoption of the WGAN term into INN yields a large enhancement, achieving compelling results even with a single classifier, e.g., a 20-fold reduction in model size over INN for texture modeling; (3) when applied to supervised classification, WINN also gives rise to greater robustness, with an $88\%$ reduction of errors against adversarial examples, improving on the $39\%$ reduction achieved by an INN-family algorithm. In the experiments, we report encouraging results on unsupervised learning problems including texture, face, and object modeling, as well as on a supervised classification task against adversarial attacks.
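The core mechanism described in the abstract, a single classifier that both discriminates and, through gradient-based synthesis on its own input, generates pseudo-negative samples, with the WGAN critic term supplying the discriminative objective, can be illustrated with a short sketch. The PyTorch code below is not the authors' implementation: the network architecture, optimizer settings, synthesis step count, image shape, and weight-clipping constant are all illustrative assumptions.

# Minimal sketch (assumptions noted above) of the WINN idea: one CNN scores
# images, synthesizes pseudo-negatives by ascending its own score w.r.t. the
# input, and is updated with a Wasserstein-style (score-difference) loss.
import torch
import torch.nn as nn

class Classifier(nn.Module):
    """Single CNN that assigns a scalar score; higher = more 'real'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 1),
        )

    def forward(self, x):
        return self.net(x)

def synthesize(model, n, steps=50, lr=0.05, img_shape=(1, 28, 28)):
    """Introspective synthesis: start from noise and maximize the classifier
    score by gradient ascent on the input (the 'generator' role)."""
    x = torch.randn(n, *img_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x).mean()          # ascend the score
        loss.backward()
        opt.step()
    return x.detach()

def winn_step(model, opt, real, clip=0.01):
    """One discriminative update with a WGAN-style critic loss: push scores
    of real samples up and of self-generated pseudo-negatives down."""
    fake = synthesize(model, real.size(0))
    opt.zero_grad()
    w_loss = model(fake).mean() - model(real).mean()
    w_loss.backward()
    opt.step()
    # Crude Lipschitz control via weight clipping, as in the original WGAN.
    for p in model.parameters():
        p.data.clamp_(-clip, clip)
    return w_loss.item()

# Usage sketch: one critic update on a batch of 28x28 images.
# model = Classifier()
# opt = torch.optim.RMSprop(model.parameters(), lr=5e-5)
# real = torch.rand(64, 1, 28, 28)
# loss = winn_step(model, opt, real)

In the full method, pseudo-negatives are accumulated over many such iterations rather than discarded; the sketch compresses the pipeline into one synthesis pass and one critic update per call for brevity.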

Related articles:
arXiv:1911.11946 [cs.CV] (Published 2019-11-27)
Can Attention Masks Improve Adversarial Robustness?
arXiv:2012.00567 [cs.CV] (Published 2020-12-01)
Improving the Transferability of Adversarial Examples with the Adam Optimizer
arXiv:1803.05787 [cs.CV] (Published 2018-03-14)
Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples