arXiv:2008.04094 [cs.CV]

Adversarial Examples on Object Recognition: A Comprehensive Survey

Alex Serban, Erik Poll, Joost Visser

Published: 2020-08-07 (Version 1)

Deep neural networks are at the forefront of machine learning research. However, despite achieving impressive performance on complex tasks, they can be very sensitive: small perturbations of inputs can be sufficient to induce incorrect behavior. Such perturbations, called adversarial examples, are intentionally designed to test the network's sensitivity to distribution drifts. Given their surprisingly small size, a wide body of literature conjectures on their existence and how this phenomenon can be mitigated. In this article we discuss the impact of adversarial examples on the security, safety, and robustness of neural networks. We start by introducing the hypotheses behind their existence, the methods used to construct or protect against them, and the capacity to transfer adversarial examples between different machine learning models. Altogether, the goal is to provide a comprehensive and self-contained survey of this growing field of research.

Comments: Published in ACM CSUR. arXiv admin note: text overlap with arXiv:1810.01185
Categories: cs.CV, cs.LG
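
To make the notion of "small perturbations" concrete, the sketch below (not taken from the paper) illustrates one widely known construction, the fast gradient sign method (FGSM): the input is nudged in the direction of the sign of the loss gradient, with a per-pixel budget eps. The model, inputs, and the eps value here are illustrative assumptions, not details from the survey.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=0.03):
        # Illustrative FGSM sketch: perturb the input in the direction that
        # increases the classification loss, changing each pixel by at most eps.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)    # loss on the clean input
        loss.backward()                        # gradient of the loss w.r.t. the input
        x_adv = x + eps * x.grad.sign()        # small worst-case step per pixel
        return x_adv.clamp(0.0, 1.0).detach()  # keep the image in a valid range

Despite the tiny perturbation budget, such inputs are often enough to flip the prediction of an otherwise accurate classifier, which is the phenomenon the survey examines.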