
arXiv:2202.10627 [cs.LG]

On the Effectiveness of Adversarial Training against Backdoor Attacks

Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, Masashi Sugiyama

Published 2022-02-22 (Version 1)

DNNs' demand for massive training data forces practitioners to collect data from the Internet without careful checks, since manual verification is prohibitively expensive, and this practice opens the door to backdoor attacks. A backdoored model consistently predicts a target class whenever a predefined trigger pattern is present, and such behavior can be implanted by poisoning only a small fraction of the training data. Adversarial training is generally believed to defend against backdoor attacks, since it encourages models to keep their predictions unchanged when the input image is perturbed within a feasible range. Unfortunately, few previous studies have succeeded in demonstrating such a defense. To explore whether adversarial training can defend against backdoor attacks, we conduct extensive experiments across different threat models and perturbation budgets, and find that the threat model used in adversarial training matters. For instance, adversarial training with spatial adversarial examples provides notable robustness against commonly used patch-based backdoor attacks. We further propose a hybrid strategy that provides satisfactory robustness across different backdoor attacks.
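For reference, the procedure the abstract calls adversarial training is, in its standard form, training on worst-case perturbed inputs generated on the fly. Below is a minimal sketch of l_inf PGD adversarial training; it is only the generic baseline, not the paper's spatial or hybrid variants, and the model/loader names, epsilon, step size, and step count are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch of l_inf PGD adversarial training (generic baseline, not the
# paper's spatial or hybrid strategy). All hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Generate l_inf-bounded adversarial examples via projected gradient descent."""
    # Random start inside the eps-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and [0, 1].
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cuda"):
    """One epoch of adversarial training: fit the model on perturbed inputs."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```

The paper's finding is that the choice of perturbation set in the inner maximization (e.g. spatial transformations versus an l_inf ball as above) is what determines robustness against patch-based backdoors; the sketch only fixes the overall training structure.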

Related articles:
arXiv:2105.12508 [cs.LG] (Published 2021-05-26)
Adversarial robustness against multiple $l_p$-threat models at the price of one and how to quickly fine-tune robust models to another threat model
arXiv:0910.2540 [cs.LG] (Published 2009-10-14)
Effectiveness and Limitations of Statistical Spam Filters
arXiv:1909.04778 [cs.LG] (Published 2019-09-10)
Effectiveness of Adversarial Examples and Defenses for Malware Classification