arXiv Analytics

arXiv:2007.05573 [cs.CV]

Improved Detection of Adversarial Images Using Deep Neural Networks

Yutong Gao, Yi Pan

Published 2020-07-10, Version 1

Machine learning techniques are widely deployed in both industry and academia. Recent studies indicate that machine learning models used for classification tasks are vulnerable to adversarial examples, which limits their use in fields with high precision requirements. We propose a new approach called Feature Map Denoising to detect adversarial inputs; it can be attached to any pre-trained DNN at low cost, and we evaluate its detection performance on a mixed dataset of adversarial examples generated by different attack algorithms. A Wiener filter is also introduced as the denoising algorithm in the defense model, which further improves performance. Experimental results indicate that our Feature Map Denoising algorithm achieves good accuracy in detecting adversarial examples.
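The abstract does not spell out how the Wiener filter is applied to feature maps, so the following is only an illustrative sketch of the classic local-statistics Wiener denoiser on a 2-D feature map, using NumPy. The function name `wiener_denoise` and the parameters `k` (window size) and `noise_var` are assumptions for illustration, not the paper's API.

```python
import numpy as np

def wiener_denoise(fmap, k=3, noise_var=None):
    """Local adaptive Wiener filter over a 2-D feature map.

    Illustrative sketch only: the paper's exact filter settings are not
    given in the abstract. This follows the standard local-statistics
    Wiener formulation, shrinking each value toward its local mean in
    proportion to the estimated local signal-to-noise ratio.
    """
    pad = k // 2
    padded = np.pad(fmap, pad, mode="reflect")
    # Local mean and variance over k x k sliding windows.
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    mu = win.mean(axis=(-1, -2))
    var = win.var(axis=(-1, -2))
    if noise_var is None:
        # Estimate the noise power as the average local variance.
        noise_var = var.mean()
    # Wiener gain: suppress regions where local variance ~ noise variance.
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mu + gain * (fmap - mu)
```

A detector in the spirit of the abstract could then compare a network's predictions on the original and denoised feature maps, flagging inputs whose predictions change as likely adversarial.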

Related articles: Most relevant | Search more
arXiv:1911.11946 [cs.CV] (Published 2019-11-27)
Can Attention Masks Improve Adversarial Robustness?
arXiv:2209.02997 [cs.CV] (Published 2022-09-07)
On the Transferability of Adversarial Examples between Encrypted Models
arXiv:1803.05787 [cs.CV] (Published 2018-03-14)
Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples