arXiv:2008.06081 [cs.LG]
Adversarial Training and Provable Robustness: A Tale of Two Objectives
Published 2020-08-13Version 1
We propose a principled framework that combines adversarial training and provable robustness verification for training certifiably robust neural networks. We formulate training as a joint optimization problem with both an empirical and a provable robustness objective, and develop a novel gradient-descent technique that eliminates bias in stochastic multi-gradients. We provide a theoretical analysis of the convergence of the proposed technique and an experimental comparison with state-of-the-art methods. Results on MNIST and CIFAR-10 show that our method matches or outperforms prior approaches for provable l-infinity robustness.
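The abstract does not spell out the paper's bias-corrected update, but the joint optimization of two objectives is commonly handled by taking a min-norm convex combination of the two gradients (the two-objective case of MGDA). The sketch below illustrates that generic idea only; the function name and the closed-form rule are standard multi-objective optimization, not the paper's specific algorithm.

```python
import numpy as np

def min_norm_combination(g1, g2):
    """Min-norm convex combination alpha*g1 + (1-alpha)*g2 of two
    objective gradients (two-objective MGDA). A generic illustration,
    not the paper's bias-corrected stochastic multi-gradient update."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        # Gradients coincide; any convex combination is identical.
        return g1.copy()
    # Closed-form minimizer of ||alpha*g1 + (1-alpha)*g2||^2 over [0, 1].
    alpha = np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return alpha * g1 + (1.0 - alpha) * g2

# Example: orthogonal gradients from an empirical and a provable
# robustness loss are balanced equally.
g_emp = np.array([1.0, 0.0])
g_prov = np.array([0.0, 1.0])
print(min_norm_combination(g_emp, g_prov))  # → [0.5 0.5]
```

When one gradient already decreases both objectives (alpha clips to 0 or 1), the combination reduces to single-objective descent on that gradient.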
Comments: 16 pages
Related articles:
arXiv:1910.04279 [cs.LG] (Published 2019-10-09)
Adversarial Training: embedding adversarial perturbations into the parameter space of a neural network to build a robust system
arXiv:1611.03383 [cs.LG] (Published 2016-11-10)
Disentangling factors of variation in deep representations using adversarial training
arXiv:2006.00387 [cs.LG] (Published 2020-05-30)
Exploring Model Robustness with Adaptive Networks and Improved Adversarial Training