arXiv Analytics


arXiv:2103.02152 [cs.CV]

Group-wise Inhibition based Feature Regularization for Robust Classification

Haozhe Liu, Haoqian Wu, Weicheng Xie, Feng Liu, Linlin Shen

Published 2021-03-03 (Version 1)

The vanilla convolutional neural network (CNN) is vulnerable to images with small variations (e.g. corrupted and adversarial samples). One possible reason is that the CNN pays most attention to the most discriminative regions while ignoring auxiliary features, leading to a lack of feature diversity. In our method, we propose to dynamically suppress the significant activation values of the vanilla CNN by group-wise inhibition during training, rather than fixing or randomly handling them. Feature maps with different activation distributions are then processed separately, owing to the independence of features. The vanilla CNN is thus guided by the proposed regularization to learn richer discriminative features hierarchically for robust classification. The proposed method achieves a robustness gain of over 15% compared with the state-of-the-art. We also show that the proposed regularization complements other defense paradigms, such as adversarial training, to further improve robustness.
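To make the idea of suppressing dominant activations concrete, the following is a minimal PyTorch sketch of a group-wise inhibition layer. It is an illustrative assumption of how such a regularizer could look, not the paper's exact formulation: the `GroupWiseInhibition` module, its `num_groups` and `suppress_ratio` parameters, and the top-k thresholding rule are all hypothetical choices made for this example.

```python
import torch
import torch.nn as nn


class GroupWiseInhibition(nn.Module):
    """Sketch of group-wise activation inhibition (illustrative, not the paper's exact method).

    Channels are split into groups; within each group, the spatial positions
    with the most significant group-level activations are suppressed during
    training, pushing the network to also rely on auxiliary features.
    """

    def __init__(self, num_groups: int = 4, suppress_ratio: float = 0.1):
        super().__init__()
        self.num_groups = num_groups        # assumed number of channel groups
        self.suppress_ratio = suppress_ratio  # assumed fraction of positions to inhibit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x  # regularize only during training
        n, c, h, w = x.shape
        g = self.num_groups
        assert c % g == 0, "channels must be divisible by num_groups"
        xg = x.view(n, g, c // g, h, w)

        # Group-level saliency: mean activation map over the channels of each group.
        saliency = xg.mean(dim=2)                       # (n, g, h, w)
        flat = saliency.view(n, g, -1)
        k = max(1, int(self.suppress_ratio * h * w))

        # Threshold = k-th largest saliency value within each group.
        thresh = flat.topk(k, dim=-1).values[..., -1]   # (n, g)
        mask = (saliency >= thresh.view(n, g, 1, 1)).float()

        # Inhibit (zero out) the most significant spatial positions per group.
        xg = xg * (1.0 - mask.unsqueeze(2))
        return xg.view(n, c, h, w)
```

In such a design, the module would typically be inserted after intermediate convolutional blocks so that, when the dominant responses are inhibited, the remaining gradient flows through less discriminative regions and encourages feature diversity.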

Related articles:
arXiv:2412.04245 [cs.CV] (Published 2024-12-05)
Intriguing Properties of Robust Classification
arXiv:2302.02503 [cs.CV] (Published 2023-02-05)
Leaving Reality to Imagination: Robust Classification via Generated Datasets
arXiv:1805.03438 [cs.CV] (Published 2018-05-09)
Robust Classification with Convolutional Prototype Learning