arXiv:1704.01155 [cs.CV]

Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks

Weilin Xu, David Evans, Yanjun Qi

Published 2017-04-04 (Version 1)

Although deep neural networks (DNNs) have achieved great success in many computer vision tasks, recent studies have shown that they are vulnerable to adversarial examples. Such examples, typically generated by adding small but purposeful distortions, can frequently fool DNN models. Previous studies on defending against adversarial examples have mostly focused on refining the DNN models themselves; these defenses have either shown limited success or suffered from expensive computation. We propose a new strategy, \emph{feature squeezing}, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with its prediction on the squeezed input, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two instances of feature squeezing: reducing the color bit depth of each pixel, and smoothing with a spatial filter. These strategies are straightforward, inexpensive, and complementary to defensive methods that operate on the underlying model, such as adversarial training.
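The detection strategy the abstract describes can be sketched in a few lines: squeeze the input with each squeezer, run the model on the original and squeezed versions, and flag the input if any prediction moves too far in L1 distance. The sketch below is a minimal illustration, not the authors' reference implementation; the helper names (`reduce_bit_depth`, `median_smooth`, `detect_adversarial`), the 2x2 median window, and the threshold value are illustrative assumptions, and `model_predict` stands in for any function mapping an image to a probability vector.

```python
import numpy as np

def reduce_bit_depth(x, bits=4):
    # Quantize pixel values in [0, 1] to 2**bits levels, then rescale
    # back to [0, 1]. This is the color-depth squeezer.
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(x, size=2):
    # Simple spatial-smoothing squeezer: median over a size x size
    # window for each pixel of a 2-D grayscale image, edge-padded.
    pad = size // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.median(xp[i:i + size, j:j + size])
    return out

def detect_adversarial(model_predict, x, threshold=1.0):
    # Compare the model's prediction on the original input with its
    # predictions on the squeezed inputs; a large L1 distance between
    # probability vectors flags the input as likely adversarial.
    # threshold=1.0 is an illustrative value, not the paper's setting.
    p_orig = model_predict(x)
    scores = [np.abs(p_orig - model_predict(s)).sum()
              for s in (reduce_bit_depth(x), median_smooth(x))]
    return max(scores) > threshold
```

On a benign input, both squeezers barely perturb the prediction, so the L1 scores stay small and the detector stays quiet; an adversarial perturbation, being small but carefully placed, tends to be destroyed by quantization or smoothing, producing a large score.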

Related articles:
arXiv:1804.03928 [cs.CV] (Published 2018-04-11)
Deep Learning For Computer Vision Tasks: A review
arXiv:2104.10972 [cs.CV] (Published 2021-04-22)
ImageNet-21K Pretraining for the Masses
arXiv:2103.09950 [cs.CV] (Published 2021-03-17)
Learning to Resize Images for Computer Vision Tasks