arXiv Analytics


arXiv:2410.16449 [stat.ML]

Robust Feature Learning for Multi-Index Models in High Dimensions

Alireza Mousavi-Hosseini, Adel Javanmard, Murat A. Erdogdu

Published 2024-10-21 (Version 1)

Recently, there have been numerous studies on feature learning with neural networks, specifically on learning single- and multi-index models where the target is a function of a low-dimensional projection of the input. Prior works have shown that in high dimensions, the majority of the compute and data resources are spent on recovering the low-dimensional projection; once this subspace is recovered, the remainder of the target can be learned independently of the ambient dimension. However, the implications of feature learning in adversarial settings remain unexplored. In this work, we take the first steps towards understanding adversarially robust feature learning with neural networks. Specifically, we prove that the hidden directions of a multi-index model offer a Bayes optimal low-dimensional projection for robustness against $\ell_2$-bounded adversarial perturbations under the squared loss, assuming that the multi-index coordinates are statistically independent of the remaining coordinates. Therefore, robust learning can be achieved by first performing standard feature learning, then robustly tuning a linear readout layer on top of the standard representations. In particular, we show that adversarially robust learning is just as easy as standard learning, in the sense that the additional number of samples needed to robustly learn multi-index models, compared to standard learning, does not depend on the ambient dimension.
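The two-stage recipe the abstract describes (standard feature learning to recover the hidden subspace, then tuning only a linear readout for robustness) can be sketched numerically. The snippet below is an illustration under simplifying assumptions, not the paper's algorithm: the link function `g`, the Stein-type moment estimator for the hidden subspace, the small feature dictionary, and the FGM-style robust tuning step are all hypothetical choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 50, 4000, 2                      # ambient dim, samples, index dim

# Hypothetical multi-index model y = g(U^T x) with Gaussian inputs.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
def g(Z):                                  # one even and one odd direction
    return Z[:, 0] ** 2 + np.tanh(Z[:, 1])

X = rng.standard_normal((n, d))
y = g(X @ U)

# Stage 1 -- standard feature learning: estimate span(U) from Stein-type
# moments (the first moment picks up odd links, the second picks up even ones).
m1 = X.T @ y / n                                        # ~ E[g'] * direction
M2 = X.T @ (X * y[:, None]) / n - y.mean() * np.eye(d)  # ~ E[g''] * u u^T
w2 = np.linalg.eigh(M2)[1][:, -1]                       # top eigenvector
W_hat, _ = np.linalg.qr(np.column_stack([m1, w2]))      # estimated basis

# Stage 2 -- fit a linear readout on a feature dictionary of z = W_hat^T x.
def phi(Z):
    z1, z2 = Z[:, 0], Z[:, 1]
    return np.column_stack([np.ones(len(Z)), z1, z2,
                            z1**2, z2**2, z1 * z2,
                            np.tanh(z1), np.tanh(z2)])

Ztr = X @ W_hat
P = phi(Ztr)
a = np.linalg.solve(P.T @ P + 1e-3 * np.eye(P.shape[1]), P.T @ y)  # ridge

# Robust tuning of the readout only: gradient descent on an FGM-perturbed
# squared loss. An l2-bounded perturbation of x acts on the prediction only
# through z, so we perturb z directly with the same budget.
eps, lr = 0.3, 1e-2
a_rob = a.copy()
for _ in range(300):
    z1, z2 = Ztr[:, 0], Ztr[:, 1]
    # analytic gradient of the prediction w.r.t. z (chain rule through phi)
    g1 = a_rob[1] + 2*a_rob[3]*z1 + a_rob[5]*z2 + a_rob[6] / np.cosh(z1)**2
    g2 = a_rob[2] + 2*a_rob[4]*z2 + a_rob[5]*z1 + a_rob[7] / np.cosh(z2)**2
    r = phi(Ztr) @ a_rob - y
    Gz = 2 * r[:, None] * np.column_stack([g1, g2])
    delta = eps * Gz / (np.linalg.norm(Gz, axis=1, keepdims=True) + 1e-12)
    Pa = phi(Ztr + delta)                           # FGM-perturbed features
    a_rob -= lr * 2 * Pa.T @ (Pa @ a_rob - y) / n   # step on the robust loss
```

Only the readout vector is updated in the robust phase; the learned projection `W_hat` is frozen, mirroring the abstract's claim that the hidden directions already give a Bayes optimal projection for robustness.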

Related articles:
arXiv:1802.03039 [stat.ML] (Published 2018-02-08)
Imitation networks: Few-shot learning of neural networks from scratch
arXiv:1810.05546 [stat.ML] (Published 2018-10-12)
Uncertainty in Neural Networks: Bayesian Ensembling
arXiv:1710.03667 [stat.ML] (Published 2017-10-10)
High-dimensional dynamics of generalization error in neural networks