arXiv Analytics

arXiv:1801.00905 [cs.LG]

Neural Networks in Adversarial Setting and Ill-Conditioned Weight Space

Mayank Singh, Abhishek Sinha, Balaji Krishnamurthy

Published 2018-01-03, Version 1

Recently, neural networks have seen a huge surge in adoption due to their ability to provide high accuracy on various tasks. On the other hand, the existence of adversarial examples has raised suspicions regarding the generalization capabilities of neural networks. In this work, we focus on the weight matrices learnt by neural networks and hypothesize that an ill-conditioned weight matrix is one of the factors contributing to a neural network's susceptibility to adversarial examples. To ensure that the condition number of the learnt weight matrices remains sufficiently low, we suggest using an orthogonal regularizer. We show that this indeed helps in increasing the adversarial accuracy on the MNIST and F-MNIST datasets.
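The abstract does not specify the exact form of the regularizer; a common way to realize the idea it describes is to add a penalty of the form ||W^T W - I||_F^2 to the training loss, which pushes each weight matrix toward orthogonality (condition number close to 1). The sketch below is an illustrative PyTorch implementation under that assumption; the function name orthogonal_penalty, the coefficient beta, and the toy model are hypothetical choices, not the authors' code.

```python
import torch
import torch.nn as nn

def orthogonal_penalty(model, beta=1e-4):
    """Sum of ||W^T W - I||_F^2 over all 2-D weight matrices.
    beta is a hypothetical regularization strength."""
    penalty = 0.0
    for param in model.parameters():
        if param.dim() == 2:                      # only weight matrices, not biases
            gram = param.t() @ param              # W^T W
            eye = torch.eye(gram.size(0), device=param.device)
            penalty = penalty + ((gram - eye) ** 2).sum()
    return beta * penalty

# Usage: add the penalty to the task loss during training.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
criterion = nn.CrossEntropyLoss()
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = criterion(model(x), y) + orthogonal_penalty(model)
loss.backward()
```

Because an orthogonal matrix has all singular values equal to 1, driving this penalty down keeps the singular values of each weight matrix close together and hence keeps the condition number low, matching the motivation stated in the abstract.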

Related articles:
arXiv:1806.01547 [cs.LG] (Published 2018-06-05)
ClusterNet : Semi-Supervised Clustering using Neural Networks
arXiv:1805.09370 [cs.LG] (Published 2018-05-23)
Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients
arXiv:1904.01399 [cs.LG] (Published 2019-04-02)
On Geometric Structure of Activation Spaces in Neural Networks