arXiv Analytics


arXiv:1707.03631 [cs.LG]

Adversarial Dropout for Supervised and Semi-supervised Learning

Sungrae Park, Jun-Keon Park, Su-Jin Shin, Il-Chul Moon

Published 2017-07-12Version 1

Recently, training with adversarial examples, which are generated by adding a small but worst-case perturbation to input examples, has been shown to improve the generalization performance of neural networks. In contrast to perturbing individual inputs to enhance generality, this paper introduces adversarial dropout: a minimal set of dropped units that maximizes the divergence between the network's output under the dropout and the training supervision. The identified adversarial dropout is used to reconfigure the neural network during training, and we demonstrate that training on the reconfigured sub-network improves the generalization performance of supervised and semi-supervised learning tasks on MNIST and CIFAR-10. We analyzed the trained model to explain the performance improvement and found that adversarial dropout increases the sparsity of neural networks more than standard dropout does.
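The core idea above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the authors' implementation: a tiny two-layer network, a KL divergence against the supervision signal, and a simple random search (standing in for the paper's exact optimization) over masks that drop at most a small budget of hidden units, keeping the mask that maximizes the divergence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network; weights are arbitrary, for illustration only.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x, mask):
    """Forward pass with a dropout mask applied to the hidden layer."""
    h = np.maximum(x @ W1, 0.0) * mask      # ReLU, then element-wise dropout
    logits = h @ W2
    e = np.exp(logits - logits.max())
    return e / e.sum()                      # softmax output

def kl(p, q, eps=1e-12):
    """KL divergence KL(p || q) with a small epsilon for stability."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

x = rng.normal(size=4)
target = np.array([1.0, 0.0, 0.0])          # one-hot training supervision

base_mask = np.ones(8)                      # no units dropped initially
budget = 2                                  # "minimal set": drop at most 2 units

# Adversarial dropout (sketch): among masks that turn off at most
# `budget` units, keep the one maximizing divergence from the supervision.
best_mask = base_mask
best_div = kl(target, forward(x, base_mask))
for _ in range(200):
    mask = base_mask.copy()
    off = rng.choice(8, size=budget, replace=False)
    mask[off] = 0.0
    d = kl(target, forward(x, mask))
    if d > best_div:
        best_mask, best_div = mask, d

# The network reconfigured with `best_mask` is the adversarial sub-network
# one would then train on.
print("units dropped:", int(base_mask.sum() - best_mask.sum()))
```

Training would then minimize the loss of this worst-case sub-network, analogous to how adversarial training minimizes the loss on worst-case perturbed inputs.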

Comments: Submitted to a CS conference
Keywords: Adversarial training, Artificial Intelligence, Neural Network
Categories: cs.LG, cs.CV
Related articles:
arXiv:1901.10513 [cs.LG] (Published 2019-01-29)
Adversarial Examples Are a Natural Consequence of Test Error in Noise
arXiv:2002.08859 [cs.LG] (Published 2020-02-20)
A Bayes-Optimal View on Adversarial Examples
arXiv:2002.04599 [cs.LG] (Published 2020-02-11)
Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations