
arXiv:2209.02997 [cs.CV]

On the Transferability of Adversarial Examples between Encrypted Models

Miki Tanaka, Isao Echizen, Hitoshi Kiya

Published 2022-09-07 (Version 1)

Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). In addition, AEs exhibit adversarial transferability: AEs generated for a source model can also fool other (target) models. In this paper, we investigate, for the first time, the transferability of AEs between models encrypted as an adversarially robust defense. To objectively verify this transferability, the robustness of the models is evaluated with a benchmark attack method called AutoAttack. In an image-classification experiment, the use of encrypted models is confirmed not only to be robust against AEs but also to reduce the influence of AEs transferred from other models.
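The notion of adversarial transferability described above can be illustrated with a minimal sketch: craft an AE in a white-box fashion against a source model, then test whether it also fools a separately trained target model. The sketch below uses a single FGSM step on toy linear classifiers purely for illustration; the paper itself evaluates encrypted image classifiers with AutoAttack, and all weights and inputs here are made-up values.

```python
# Toy illustration of adversarial transferability (hypothetical setup;
# the paper uses AutoAttack on encrypted DNNs, not FGSM on linear models).

def predict(w, x):
    """Binary linear classifier: returns label +1 or -1."""
    score = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if score >= 0 else -1

def fgsm(w, x, y, eps):
    """One FGSM step for a linear logistic model.
    The loss gradient w.r.t. x is -y * sigmoid(-y * w.x) * w,
    so its sign is simply the elementwise sign of -y * w."""
    return [xi + eps * (1 if -y * wi > 0 else -1) for xi, wi in zip(x, w)]

w_source = [2.0, 1.0]   # source (white-box) model weights -- assumed values
w_target = [1.8, 1.2]   # independently trained target model -- assumed values
x, y = [0.3, 0.2], 1    # an input both models classify correctly

x_adv = fgsm(w_source, x, y, eps=0.5)
print(predict(w_source, x_adv))  # -1: the AE fools the source model
print(predict(w_target, x_adv))  # -1: the AE transfers to the target model
```

Transferability is what makes black-box attacks practical: an adversary with no access to the target model can still attack it via AEs crafted on a substitute model, which is why the paper measures transferability between encrypted models rather than only white-box robustness.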

Comments: to appear in ISPACS 2022
Categories: cs.CV