arXiv:2005.13815 [cs.LG]

Adversarial Classification via Distributional Robustness with Wasserstein Ambiguity

Nam Ho-Nguyen, Stephen J. Wright

Published 2020-05-28, Version 1

We study a model for adversarial classification based on distributionally robust chance constraints. We show that under Wasserstein ambiguity, the model aims to minimize the conditional value-at-risk of the distance to misclassification, and we explore links to previous adversarial classification models and maximum margin classifiers. We also provide a reformulation of the distributionally robust model for linear classifiers, and show it is equivalent to minimizing a regularized ramp loss. Numerical experiments show that, despite the nonconvexity, standard descent methods appear to converge to the global minimizer for this problem. Inspired by this observation, we show that, for a certain benign distribution, the regularized ramp loss minimization problem has a single stationary point, at the global minimizer.
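The regularized ramp-loss reformulation mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm or their exact reformulation; it is a generic subgradient-descent loop for minimizing `lam * ||w||^2 + mean ramp(y * (w.x + b))`, where `ramp(z) = min(1, max(0, 1 - z))` is the clipped hinge loss. All names (`train`, `lam`, the initialization) and the synthetic "benign" two-Gaussian data below are illustrative assumptions.

```python
import numpy as np

def ramp_loss(z):
    """Ramp loss min(1, max(0, 1 - z)): a hinge loss clipped at 1, hence bounded but nonconvex."""
    return np.clip(1.0 - z, 0.0, 1.0)

def objective(w, b, X, y, lam):
    """Regularized ramp-loss objective (illustrative form, not the paper's exact reformulation)."""
    z = y * (X @ w + b)
    return lam * np.dot(w, w) + ramp_loss(z).mean()

def train(X, y, lam=0.01, lr=0.2, iters=300):
    """Plain subgradient descent on the regularized ramp loss."""
    n, d = X.shape
    # Small data-driven init: w = 0 sits on a flat piece of the ramp term,
    # where the subgradient used below vanishes.
    w = 0.05 * (X * y[:, None]).mean(axis=0)
    b = 0.0
    for _ in range(iters):
        z = y * (X @ w + b)
        # The ramp is linear (slope -1 in z) only where 0 < z < 1; it is flat
        # both for well-classified points (z > 1) and badly misclassified ones
        # (z < 0) -- the latter is the source of nonconvexity.
        active = (z > 0.0) & (z < 1.0)
        gw = 2.0 * lam * w - (X[active] * y[active, None]).sum(axis=0) / n
        gb = -y[active].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b
```

On well-separated data of the kind the abstract calls "benign", such a descent loop typically drives the training error to zero despite the flat regions of the loss, consistent with the paper's empirical observation that standard descent methods reach the global minimizer.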

Related articles:
arXiv:2410.10533 [cs.LG] (Published 2024-10-14)
Non-convergence to global minimizers in data driven supervised deep learning: Adam and stochastic gradient descent optimization provably fail to converge to global minimizers in the training of deep neural networks with ReLU activation
arXiv:2001.11988 [cs.LG] (Published 2020-01-31)
Consensus-based Optimization on the Sphere II: Convergence to Global Minimizers and Machine Learning
arXiv:2011.14126 [cs.LG] (Published 2020-11-28)
Risk-Monotonicity via Distributional Robustness