arXiv:2303.14173 [cs.LG]

How many dimensions are required to find an adversarial example?

Charles Godfrey, Henry Kvinge, Elise Bishoff, Myles Mckay, Davis Brown, Tim Doster, Eleanor Byler

Published 2023-03-24 (Version 1)

Past work exploring adversarial vulnerability has focused on situations where an adversary can perturb all dimensions of a model's input. In contrast, a range of recent works consider the case where an adversary can perturb either (i) a limited number of input parameters or (ii) only a subset of modalities in a multimodal problem. In both cases, adversarial examples are effectively constrained to a subspace $V$ of the ambient input space $\mathcal{X}$. Motivated by this, in this work we investigate how adversarial vulnerability depends on $\dim(V)$. In particular, we show that the adversarial success of standard PGD attacks with $\ell^p$ norm constraints behaves like a monotonically increasing function of $\epsilon (\frac{\dim(V)}{\dim \mathcal{X}})^{\frac{1}{q}}$, where $\epsilon$ is the perturbation budget and $\frac{1}{p} + \frac{1}{q} = 1$, provided $p > 1$ (the case $p = 1$ presents additional subtleties, which we analyze in some detail). This functional form can be easily derived from a simple toy linear model, and as such our results lend further credence to arguments that adversarial examples are endemic to locally linear models on high-dimensional spaces.
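The scaling above can be illustrated by a minimal worked derivation from a toy linear model. This is a sketch under simplifying assumptions (a coordinate subspace $V$ and weights of comparable magnitude), not necessarily the paper's exact setting:

Consider a linear model $f(x) = \langle w, x \rangle$ on $\mathcal{X} = \mathbb{R}^n$ and a perturbation $\delta$ constrained to a $k$-dimensional coordinate subspace $V$ with $\|\delta\|_p \le \epsilon$. By H\"older duality,
$$\max_{\delta \in V, \; \|\delta\|_p \le \epsilon} \langle w, \delta \rangle = \epsilon \, \|P_V w\|_q, \qquad \frac{1}{p} + \frac{1}{q} = 1,$$
where $P_V$ denotes the coordinate projection onto $V$. If the coordinates of $w$ have comparable magnitude $c$, then $\|P_V w\|_q \approx c \, k^{1/q}$, so the maximal change in the model output scales as $\epsilon \, (\dim V)^{1/q}$; normalizing by the full-input case $\dim V = \dim \mathcal{X}$ yields the factor $\epsilon (\frac{\dim V}{\dim \mathcal{X}})^{\frac{1}{q}}$ appearing in the abstract. For $p = 1$ ($q = \infty$) this factor is constant in $\dim V$, which hints at why that case requires separate treatment.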

Related articles:
arXiv:1905.07672 [cs.LG] (Published 2019-05-19)
Things You May Not Know About Adversarial Example: A Black-box Adversarial Image Attack
arXiv:1905.00441 [cs.LG] (Published 2019-05-01)
NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
arXiv:2203.17209 [cs.LG] (Published 2022-03-31)
Adversarial Examples in Random Neural Networks with General Activations