arXiv:2301.10454 [cs.LG]

A Data-Centric Approach for Improving Adversarial Training Through the Lens of Out-of-Distribution Detection

Mohammad Azizmalayeri, Arman Zarei, Alireza Isavand, Mohammad Taghi Manzuri, Mohammad Hossein Rohban

Published 2023-01-25 (Version 1)

Current machine learning models achieve super-human performance in many real-world applications, yet they remain susceptible to imperceptible adversarial perturbations. The most effective defense against this problem is adversarial training, which trains the model on adversarially perturbed samples instead of the original ones. Various methods have been developed in recent years to improve adversarial training, such as data augmentation or modifying the training attack. In this work, we examine the same problem from a new, data-centric perspective. To this end, we first demonstrate that existing model-based methods can be equivalent to applying smaller perturbations or optimization weights to the hard training examples. Building on this finding, we propose detecting and removing these hard samples from the training procedure directly, rather than applying complicated algorithms to mitigate their effects. For detection, we use the maximum softmax probability, an effective method in out-of-distribution detection, since the hard samples can be regarded as out-of-distribution with respect to the overall data distribution. Our results on the SVHN and CIFAR-10 datasets show that this method improves adversarial training without adding significant computational cost.
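A minimal PyTorch sketch of the filtering step described in the abstract is given below, assuming a trained classifier and a non-shuffled DataLoader; the helper names and the drop_fraction hyperparameter are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_scores(model, loader, device="cuda"):
    """Maximum softmax probability (MSP) per training sample.

    A low MSP suggests the sample is "hard", i.e. out-of-distribution
    relative to the bulk of the training data. The loader must iterate
    in dataset order (shuffle=False) so scores align with indices.
    """
    model.eval()
    scores = []
    for x, _ in loader:
        probs = F.softmax(model(x.to(device)), dim=1)
        scores.append(probs.max(dim=1).values.cpu())
    return torch.cat(scores)

def keep_easy_indices(scores, drop_fraction=0.05):
    """Indices that remain after dropping the lowest-MSP fraction.

    drop_fraction is a hypothetical hyperparameter controlling how
    many hard samples are removed before adversarial training.
    """
    n_drop = int(drop_fraction * scores.numel())
    order = torch.argsort(scores)  # ascending: hardest samples first
    return order[n_drop:]

# The kept indices can then be wrapped in a Subset and passed to any
# standard adversarial training loop (e.g. PGD-based training):
# train_subset = torch.utils.data.Subset(train_set,
#                                        keep_easy_indices(scores).tolist())
```

Under these assumptions, the score pass adds only one clean forward pass over the training set, consistent with the abstract's claim that the method avoids significant extra computational cost.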
