{ "id": "2303.14173", "version": "v1", "published": "2023-03-24T17:36:15.000Z", "updated": "2023-03-24T17:36:15.000Z", "title": "How many dimensions are required to find an adversarial example?", "authors": [ "Charles Godfrey", "Henry Kvinge", "Elise Bishoff", "Myles Mckay", "Davis Brown", "Tim Doster", "Eleanor Byler" ], "comment": "Comments welcome!", "categories": [ "cs.LG", "cs.CR", "stat.ML" ], "abstract": "Past work exploring adversarial vulnerability have focused on situations where an adversary can perturb all dimensions of model input. On the other hand, a range of recent works consider the case where either (i) an adversary can perturb a limited number of input parameters or (ii) a subset of modalities in a multimodal problem. In both of these cases, adversarial examples are effectively constrained to a subspace $V$ in the ambient input space $\\mathcal{X}$. Motivated by this, in this work we investigate how adversarial vulnerability depends on $\\dim(V)$. In particular, we show that the adversarial success of standard PGD attacks with $\\ell^p$ norm constraints behaves like a monotonically increasing function of $\\epsilon (\\frac{\\dim(V)}{\\dim \\mathcal{X}})^{\\frac{1}{q}}$ where $\\epsilon$ is the perturbation budget and $\\frac{1}{p} + \\frac{1}{q} =1$, provided $p > 1$ (the case $p=1$ presents additional subtleties which we analyze in some detail). This functional form can be easily derived from a simple toy linear model, and as such our results land further credence to arguments that adversarial examples are endemic to locally linear models on high dimensional spaces.", "revisions": [ { "version": "v1", "updated": "2023-03-24T17:36:15.000Z" } ], "analyses": { "subjects": [ "68T07", "G.3", "I.2", "I.5", "J.2" ], "keywords": [ "adversarial example", "dimensions", "past work exploring adversarial vulnerability", "simple toy linear model", "high dimensional spaces" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }