arXiv Analytics

arXiv:1806.03316 [cs.LG]

Adversarial Meta-Learning

Chengxiang Yin, Jian Tang, Zhiyuan Xu, Yanzhi Wang

Published 2018-06-08 (Version 1)

Meta-learning enables a model to learn a new task from very limited data. In this paper, we study general meta-learning in the presence of adversarial samples. We present a meta-learning algorithm, ADML (ADversarial Meta-Learner), which leverages both clean and adversarial samples to optimize the initialization of a learning model in an adversarial manner. ADML has three desirable properties: 1) it is effective even when only clean samples are available; 2) it is model-agnostic, i.e., compatible with any learning model that can be trained with gradient descent; and, most importantly, 3) it is robust to adversarial samples, i.e., unlike other meta-learning methods, it suffers only a minor performance degradation when adversarial samples are present. Extensive experiments show that ADML delivers state-of-the-art performance on two widely used image datasets, MiniImageNet and CIFAR100, in terms of both accuracy and robustness.
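To make the description above concrete, below is a minimal sketch of what one adversarial meta-update could look like, reconstructed from the abstract alone. It follows a first-order MAML-style loop; the FGSM attack, the single inner SGD step, the equal weighting of clean and adversarial query losses, and all hyperparameter values are illustrative assumptions, not the paper's actual procedure.

```python
# Hedged sketch of an adversarial meta-update, inferred from the abstract.
# Not the authors' ADML algorithm: attack choice, inner-step count, loss
# weighting, and hyperparameters are all assumptions for illustration.
import copy

import torch
import torch.nn as nn


def fgsm(model, x, y, epsilon, loss_fn):
    # Fast gradient sign method, a standard attack; the abstract does not
    # specify which attack ADML uses. Inputs are assumed to lie in [0, 1].
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()


def adversarial_meta_step(model, tasks, inner_lr=0.01, epsilon=0.03):
    """Accumulate one meta-gradient over a batch of few-shot tasks.

    Each task is a tuple (x_sup, y_sup, x_qry, y_qry) of support and
    query tensors; the caller applies an optimizer step on `model`.
    """
    loss_fn = nn.CrossEntropyLoss()
    for p in model.parameters():
        p.grad = torch.zeros_like(p)

    for x_sup, y_sup, x_qry, y_qry in tasks:
        # Inner loop: adapt a copy of the shared initialization on the
        # clean support set (one SGD step, as in first-order MAML).
        learner = copy.deepcopy(model)
        opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        opt.zero_grad()
        loss_fn(learner(x_sup), y_sup).backward()
        opt.step()

        # Outer loop: evaluate the adapted model on clean *and*
        # adversarial query samples, so the initialization is pushed
        # toward robustness as well as accuracy.
        x_qry_adv = fgsm(learner, x_qry, y_qry, epsilon, loss_fn)
        learner.zero_grad()
        outer_loss = 0.5 * (loss_fn(learner(x_qry), y_qry)
                            + loss_fn(learner(x_qry_adv), y_qry))
        outer_loss.backward()

        # First-order approximation: fold the adapted model's gradients
        # back into the shared initialization.
        for p, q in zip(model.parameters(), learner.parameters()):
            if q.grad is not None:
                p.grad.add_(q.grad / len(tasks))
```

Averaging the clean and adversarial query losses is simply the most direct way to expose the shared initialization to an adversarial training signal; the paper itself should be consulted for ADML's exact adversarial optimization scheme.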

Comments: 10 pages; submitted to NIPS 2018
Categories: cs.LG, stat.ML
Related articles:
arXiv:1805.12017 [cs.LG] (Published 2018-05-30)
Counterstrike: Defending Deep Learning Architectures Against Adversarial Samples by Langevin Dynamics with Supervised Denoising Autoencoder
arXiv:1901.08121 [cs.LG] (Published 2019-01-23)
Sitatapatra: Blocking the Transfer of Adversarial Samples
arXiv:2107.04827 [cs.LG] (Published 2021-07-10)
Identifying Layers Susceptible to Adversarial Attacks