arXiv Analytics


arXiv:2101.04321 [cs.CV]

Random Transformation of Image Brightness for Adversarial Attack

Bo Yang, Kaiyong Xu, Hengjun Wang, Hengwei Zhang

Published 2021-01-12 (Version 1)

Deep neural networks are vulnerable to adversarial examples: images crafted by adding small, human-imperceptible perturbations to the originals that nevertheless cause the model to output incorrect predictions. Adversarial attacks are therefore an important tool for evaluating and selecting robust models before deep neural networks are deployed in safety-critical applications. However, under the challenging black-box setting, the attack success rate, i.e., the transferability of adversarial examples, still needs to be improved. Building on image augmentation methods, we find that randomly transforming the brightness of the input image can eliminate overfitting in the generation of adversarial examples and improve their transferability. Based on this observation, we propose an adversarial example generation method that can be integrated with Fast Gradient Sign Method (FGSM)-related methods to build a stronger gradient-based attack and generate adversarial examples with better transferability. Extensive experiments on the ImageNet dataset demonstrate the method's effectiveness. On both normally and adversarially trained networks, our method achieves higher black-box attack success rates than other data-augmentation-based attacks. We hope this method can help evaluate and improve the robustness of models.
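The following is a minimal sketch of the idea described in the abstract: an iterative FGSM-style attack in which the current adversarial image is rescaled by a random brightness factor before each gradient computation. It assumes a PyTorch classifier `model` taking inputs in [0, 1]; the function name `brightness_ifgsm`, the brightness range, and the step settings are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def brightness_ifgsm(model, x, y, eps=16/255, steps=10, low=0.5, high=1.5):
    """Iterative FGSM where a random brightness factor is applied to the
    current adversarial image before each gradient step (hypothetical
    hyperparameter values)."""
    alpha = eps / steps                       # per-step perturbation budget
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Draw one brightness factor per image and rescale pixel intensities.
        factor = torch.empty(x.size(0), 1, 1, 1, device=x.device).uniform_(low, high)
        x_bright = torch.clamp(x_adv * factor, 0.0, 1.0)
        # Gradient of the loss w.r.t. the untransformed adversarial image.
        loss = F.cross_entropy(model(x_bright), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                     # FGSM step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # stay in the L_inf ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv

Note that the gradient is computed on the brightness-transformed copy while the perturbation is accumulated on the original image; this is the mechanism the abstract credits with reducing overfitting to the source model and improving transferability.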

Related articles:
arXiv:2011.12680 [cs.CV] (Published 2020-11-25)
Adversarial Attack on Facial Recognition using Visible Light
arXiv:2210.05968 [cs.CV] (Published 2022-10-12)
Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation
arXiv:1502.02445 [cs.CV] (Published 2015-02-09)
Deep Neural Networks for Anatomical Brain Segmentation