arXiv:1712.00699 [cs.LG]

Improving Network Robustness against Adversarial Attacks with Compact Convolution

Rajeev Ranjan, Swami Sankaranarayanan, Carlos D. Castillo, Rama Chellappa

Published 2017-12-03, Version 1

Though Convolutional Neural Networks (CNNs) have surpassed human-level performance on tasks such as object classification and face verification, they can easily be fooled by adversarial attacks. These attacks add a small perturbation to the input image that causes the network to misclassify the sample. In this paper, we focus on neutralizing adversarial attacks by exploring the effect of different loss functions, such as CenterLoss and L2-Softmax Loss, on robustness to adversarial perturbations. Additionally, we propose power convolution, a novel convolution method that, when incorporated into conventional CNNs, improves their robustness. Power convolution ensures that the features at every layer are bounded and close to each other. Extensive experiments show that Power Convolutional Networks (PCNs) neutralize multiple types of attacks and outperform existing methods at defending against adversarial attacks.
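
As a concrete illustration of the setting, the sketch below pairs a representative attack with one of the losses named in the abstract. It is a minimal sketch, not the paper's method: it assumes a standard PyTorch classifier, uses FGSM (Goodfellow et al., 2015) to stand in for the gradient-based perturbation the abstract describes, and implements the L2-Softmax idea of rescaling features to a fixed norm, the bounded-feature property the abstract associates with robustness. The values of epsilon and alpha are illustrative choices, not the paper's settings.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        # One signed-gradient step that increases the classification loss:
        # the "small perturbation" an adversary adds to the input image.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

    class L2SoftmaxHead(nn.Module):
        # L2-Softmax: project features onto a hypersphere of radius alpha
        # before the final linear layer, so feature magnitudes are bounded.
        def __init__(self, feat_dim, num_classes, alpha=16.0):
            super().__init__()
            self.alpha = alpha
            self.fc = nn.Linear(feat_dim, num_classes)

        def forward(self, feat):
            feat = self.alpha * F.normalize(feat, p=2, dim=1)
            return self.fc(feat)

A head like this can replace the final linear layer of an ordinary CNN; the paper's power convolution goes further by bounding features at every layer, not just the last one.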

Related articles:
arXiv:1909.08072 [cs.LG] (Published 2019-09-17)
Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
arXiv:1702.02284 [cs.LG] (Published 2017-02-08)
Adversarial Attacks on Neural Network Policies
arXiv:2006.15632 [cs.LG] (Published 2020-06-28)
FDA3: Federated Defense Against Adversarial Attacks for Cloud-Based IIoT Applications