arXiv:1807.09705 [cs.LG]

Limitations of the Lipschitz constant as a defense against adversarial examples

Todd Huster, Cho-Yu Jason Chiang, Ritu Chadha

Published 2018-07-25 (Version 1)

Several recent papers have proposed using Lipschitz constants to limit the susceptibility of neural networks to adversarial examples. We analyze recently proposed methods for computing the Lipschitz constant. We show that the Lipschitz constant may indeed enable adversarially robust neural networks; however, the methods currently employed for computing it suffer from theoretical and practical limitations. We argue that addressing this shortcoming is a promising direction for future research into certified adversarial defenses.
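
To make the quantity under discussion concrete, the sketch below (not taken from the paper; the network shapes, weight names, and the `lipschitz_upper_bound` helper are illustrative assumptions) computes the standard composition-based upper bound on a ReLU network's Lipschitz constant: the product of the spectral norms of its weight matrices.

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Upper-bound the l2 Lipschitz constant of a feedforward ReLU network
    by the product of the spectral norms of its weight matrices.
    ReLU is 1-Lipschitz, so each layer contributes at most ||W||_2."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, 2)  # largest singular value of W
    return bound

# Illustrative two-layer network with random weights (shapes are arbitrary).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 32))
W2 = rng.standard_normal((10, 64))
print(lipschitz_upper_bound([W1, W2]))
```

Because each layer is bounded in isolation, this product can greatly overestimate the true Lipschitz constant of the composed network, which illustrates why how the constant is computed matters for the defenses the abstract describes.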

Related articles:
arXiv:2003.09372 [cs.LG] (Published 2020-03-20)
One Neuron to Fool Them All
arXiv:1902.06044 [cs.LG] (Published 2019-02-16)
Adversarial Examples in RF Deep Learning: Detection of the Attack and its Physical Robustness
arXiv:1902.01235 [cs.LG] (Published 2019-02-01)
Robustness Certificates Against Adversarial Examples for ReLU Networks