arXiv:2208.03835 [cs.LG]

How Adversarial Robustness Transfers from Pre-training to Downstream Tasks

Laura Fee Nern, Yash Sharma

Published 2022-08-07 (Version 1)

Given the rise of large-scale training regimes, adapting pre-trained models to a wide range of downstream tasks has become a standard approach in machine learning. While substantial gains in empirical performance have been observed, it is not yet well understood how robustness properties transfer from a pre-trained model to a downstream task. We prove that the robustness of a predictor on downstream tasks can be bounded by the robustness of its underlying representation, irrespective of the pre-training protocol. Taken together, our results precisely characterize what is required of the representation function for reliable performance upon deployment.
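
To give the flavor of such a bound, consider the standard fine-tuning decomposition f = g \circ h, where h is the pre-trained representation and g the downstream head. The sketch below is an illustrative Lipschitz-composition argument under assumptions introduced here for exposition (an L_g-Lipschitz head, and a representation whose outputs move by at most \varepsilon_h(\delta) under input perturbations \delta); it is not the paper's exact statement.

\[
\| g(h(x+\delta)) - g(h(x)) \| \;\le\; L_g \,\| h(x+\delta) - h(x) \| \;\le\; L_g \,\varepsilon_h(\delta),
\]

so under these assumptions an adversarial input perturbation can shift the downstream prediction by at most L_g \varepsilon_h(\delta), regardless of how h was pre-trained.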

Related articles:
arXiv:2301.09820 [cs.LG] (Published 2023-01-24)
A Stability Analysis of Fine-Tuning a Pre-Trained Model
arXiv:2411.12600 [cs.LG] (Published 2024-11-19)
Provable unlearning in topic modeling and downstream tasks
arXiv:2206.03826 [cs.LG] (Published 2022-06-08)
Towards Understanding Why Mask-Reconstruction Pretraining Helps in Downstream Tasks