arXiv Analytics

arXiv:1803.05787 [cs.CV]

Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples

Zihao Liu, Qi Liu, Tao Liu, Yanzhi Wang, Wujie Wen

Published 2018-03-14, Version 1

Deep Neural Networks (DNNs) have achieved remarkable performance in a myriad of realistic applications. However, recent studies show that well-trained DNNs can be easily misled by adversarial examples (AEs) -- maliciously crafted inputs containing small, imperceptible perturbations. Existing mitigation solutions, such as adversarial training and defensive distillation, suffer from expensive retraining costs and demonstrate only marginal robustness improvements against state-of-the-art attacks like the CW family of adversarial examples. In this work, we propose a novel low-cost "feature distillation" strategy that purifies the adversarial perturbations of AEs by redesigning the popular "JPEG" image compression framework. The proposed "feature distillation" maximizes the loss of malicious AE perturbation features during image compression while suppressing distortions of the benign features essential for highly accurate DNN classification. Experimental results show that our method can drastically reduce the success rate of various state-of-the-art AE attacks by ~60% on average for both CIFAR-10 and ImageNet benchmarks without harming the testing accuracy, outperforming existing solutions like default JPEG compression and "feature squeezing".
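The core mechanism the abstract describes is JPEG-style quantization in the 8×8 DCT domain: small, high-frequency adversarial perturbations produce small DCT coefficients that a sufficiently coarse quantization step rounds to zero, while the dominant (benign) image features survive. The minimal sketch below illustrates that quantization step in NumPy; the single uniform step size `qstep` is an illustrative assumption, not the paper's method, which derives frequency-dependent steps tuned for DNN features.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix (the transform JPEG applies
    # to each 8x8 pixel block).
    m = np.zeros((n, n))
    for k in range(n):
        for i in range(n):
            m[k, i] = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0, :] *= np.sqrt(1.0 / n)
    m[1:, :] *= np.sqrt(2.0 / n)
    return m

def quantize_block(block, qstep=30.0):
    # Forward DCT -> uniform quantization -> inverse DCT.
    # A single coarse step `qstep` is an assumption for illustration;
    # the paper's feature distillation instead redesigns the JPEG
    # quantization table per frequency band. Coefficients smaller
    # than qstep/2 (typical of imperceptible perturbations) round
    # to zero and are removed from the reconstruction.
    d = dct_matrix(block.shape[0])
    coeffs = d @ (block - 128.0) @ d.T          # level-shift, transform
    quant = np.round(coeffs / qstep) * qstep    # lossy quantization
    return np.clip(d.T @ quant @ d + 128.0, 0.0, 255.0)
```

As a sanity check, a block with a tiny pixel perturbation reconstructs to exactly the same output as the clean block, since the perturbation's DCT coefficients fall below the quantization threshold.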

Comments: 27th International Joint Conference on Artificial Intelligence (IJCAI-18)
Categories: cs.CV, cs.CR
Related articles:
arXiv:1804.08529 [cs.CV] (Published 2018-04-23)
VectorDefense: Vectorization as a Defense to Adversarial Examples
arXiv:1812.10217 [cs.CV] (Published 2018-12-26)
Practical Adversarial Attack Against Object Detector
arXiv:1911.11946 [cs.CV] (Published 2019-11-27)
Can Attention Masks Improve Adversarial Robustness?