arXiv:2010.02558 [stat.ML]

Constraining Logits by Bounded Function for Adversarial Robustness

Sekitoshi Kanai, Masanori Yamada, Shin'ya Yamaguchi, Hiroshi Takahashi, Yasutoshi Ida

Published 2020-10-06 (Version 1)

We propose a method for improving adversarial robustness by adding a new bounded function just before softmax. Recent studies hypothesize that making logits (the inputs of softmax) small through logit regularization can improve the adversarial robustness of deep learning models. Following this hypothesis, we analyze the norms of logit vectors at the optimal point under the assumption of universal approximation, and we explore new methods for constraining logits by adding a bounded function before softmax. We theoretically and empirically reveal that shrinking logits by adding a common activation function, e.g., hyperbolic tangent, does not improve adversarial robustness, since the input vectors of that function (pre-logit vectors) can still have large norms. Building on these theoretical findings, we develop a new bounded function. Adding our function improves adversarial robustness because it makes both logit and pre-logit vectors have small norms. Since our method only adds one activation function before softmax, it is easy to combine with adversarial training. Our experiments demonstrate that, without adversarial training, our method is comparable to logit regularization methods in terms of accuracy on adversarially perturbed datasets. Furthermore, with adversarial training, it is superior or comparable to logit regularization methods and a recent defense method (TRADES).
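The abstract describes inserting a bounded function between the network's pre-logit outputs and softmax. The sketch below illustrates only where such a function sits in the forward pass, using `tanh` as a placeholder; note that the paper argues a plain `tanh` does *not* confer robustness (pre-logit norms can still be large), and the paper's actual bounded function is not reproduced here. All names (`softmax`, `bounded_logits`, `scale`) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def bounded_logits(pre_logits, scale=1.0):
    # Placeholder bounded function: squashes pre-logits into [-scale, scale]
    # before softmax. The paper develops a different bounded function that
    # also keeps pre-logit norms small; tanh alone does not achieve that.
    return scale * np.tanh(pre_logits)

# Pre-logit vector with a large norm, as produced by an unconstrained network.
pre = np.array([3.0, -10.0, 25.0])

p_plain = softmax(pre)                  # softmax on raw logits
p_bounded = softmax(bounded_logits(pre))  # softmax on bounded logits
```

Because the logits entering softmax are confined to `[-scale, scale]`, the resulting distribution is far less peaked than with raw logits, which is the "small logits" effect the logit-regularization hypothesis appeals to.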

Related articles:
arXiv:2304.06326 [stat.ML] (Published 2023-04-13)
Understanding Overfitting in Adversarial Training in Kernel Regression
arXiv:2502.01027 [stat.ML] (Published 2025-02-03)
Adversarial Robustness in Two-Stage Learning-to-Defer: Algorithms and Guarantees
arXiv:2410.16073 [stat.ML] (Published 2024-10-21)
On the Geometry of Regularization in Adversarial Training: High-Dimensional Asymptotics and Generalization Bounds