arXiv Analytics

arXiv:2009.08233 [cs.CV]

Label Smoothing and Adversarial Robustness

Chaohao Fu, Hongbin Chen, Na Ruan, Weijia Jia

Published 2020-09-17, Version 1

Recent studies indicate that current adversarial attack methods are flawed and fail easily against deliberately designed defenses; sometimes even a slight modification of the model details invalidates an attack. We find that a model trained with label smoothing easily achieves striking accuracy under most gradient-based attacks. For instance, a WideResNet trained with label smoothing on CIFAR-10 reaches up to 75% robust accuracy under the PGD attack. To understand the reason behind this subtle robustness, we investigate the relationship between label smoothing and adversarial robustness, through both theoretical analysis of the characteristics of networks trained with label smoothing and experimental verification of their performance under various attacks. We demonstrate that the robustness produced by label smoothing is incomplete: its defense effect is volatile, and it cannot defend against attacks transferred from a naturally trained model. Our study encourages the research community to rethink how to evaluate a model's robustness appropriately.
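The two ingredients the abstract refers to can be sketched briefly. Below is a minimal NumPy illustration, not the paper's code: `smooth_labels` builds the standard Szegedy-style smoothed targets used when training with label smoothing, and `pgd_attack` is a generic L-infinity PGD loop of the kind the paper evaluates against; the function names, the `eps=8/255` budget, and the `grad_fn` interface are assumptions for the sketch.

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    # Szegedy-style label smoothing: the true class gets (1 - eps),
    # and eps is spread uniformly over all classes.
    onehot = np.eye(num_classes)[labels]
    return (1.0 - eps) * onehot + eps / num_classes

def pgd_attack(x, y, grad_fn, eps=8/255, alpha=2/255, steps=10):
    # Generic PGD under an L-infinity ball of radius eps (a sketch;
    # grad_fn(x, y) is assumed to return the loss gradient w.r.t. x).
    x_adv = x + np.random.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv, y))  # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv
```

With `eps=0.1` and 10 classes, each smoothed target assigns 0.91 to the true class and 0.01 to every other class; the paper's observation is that models trained on such targets appear robust under gradient-based attacks like the PGD loop above, while remaining vulnerable to transferred attacks.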

Related articles: Most relevant | Search more
arXiv:2209.06953 [cs.CV] (Published 2022-09-14)
On the interplay of adversarial robustness and architecture components: patches, convolution and attention
arXiv:1708.01697 [cs.CV] (Published 2017-08-05)
Adversarial Robustness: Softmax versus Openmax
arXiv:2212.11511 [cs.CV] (Published 2022-12-22)
Confidence-Aware Paced-Curriculum Learning by Label Smoothing for Surgical Scene Understanding