arXiv:2007.11709 [cs.CV]

Threat of Adversarial Attacks on Face Recognition: A Comprehensive Survey

Fatemeh Vakhshiteh, Raghavendra Ramachandra, Ahmad Nickabadi

Published 2020-07-22 (Version 1)

Face recognition (FR) systems have demonstrated outstanding verification performance, suggesting their suitability for real-world applications ranging from photo tagging in social media to automated border control (ABC). For an advanced, deep learning-based FR system, however, improving recognition accuracy alone is not sufficient: the system must also withstand the kinds of attacks designed to undermine it. Recent studies show that (deep) FR systems exhibit an intriguing vulnerability to imperceptible, or perceptible but natural-looking, adversarial input images that drive the model to incorrect predictions. In this article, we present a comprehensive survey of adversarial attacks against FR systems and examine the effectiveness of recent countermeasures against them. Further, we propose a taxonomy of existing attack and defense strategies according to different criteria. Finally, we compare the surveyed approaches according to the characteristics of their techniques.
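To make the vulnerability described above concrete, here is a minimal sketch of a gradient-based adversarial perturbation in the style of FGSM, applied to a face-embedding model. This is an illustrative example, not a method from the survey: the names `fgsm_attack`, `model`, `target_embedding`, and `epsilon` are assumptions, and `model` is presumed to be any PyTorch network mapping a face image to an identity embedding.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, target_embedding, epsilon=0.03):
    """One-step FGSM-style "dodging" attack (illustrative sketch):
    perturb `image` so its embedding drifts away from the genuine
    identity's `target_embedding`, while keeping the perturbation
    within an L-infinity ball of radius `epsilon`."""
    image = image.clone().detach().requires_grad_(True)
    embedding = model(image)
    # Loss grows as the adversarial embedding moves away from
    # the genuine template; ascending it degrades verification.
    loss = 1.0 - F.cosine_similarity(embedding, target_embedding).mean()
    loss.backward()
    # Take one step in the sign of the gradient, then clamp back
    # to the valid pixel range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

With a small `epsilon`, the resulting image is typically indistinguishable from the original to a human observer, yet its embedding can fall below the verification threshold, which is the core vulnerability the surveyed attacks exploit and the surveyed defenses aim to close.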

Related articles:
arXiv:2103.14222 [cs.CV] (Published 2021-03-26)
Adversarial Attacks are Reversible with Natural Supervision
arXiv:2002.11881 [cs.CV] (Published 2020-02-27)
Defense-PointNet: Protecting PointNet Against Adversarial Attacks
arXiv:2108.00401 [cs.CV] (Published 2021-08-01)
Threat of Adversarial Attacks on Deep Learning in Computer Vision: Survey II