arXiv Analytics

arXiv:2501.11462 [cs.CV]

On the Adversarial Vulnerabilities of Transfer Learning in Remote Sensing

Tao Bai, Xingjian Tian, Yonghao Xu, Bihan Wen

Published 2025-01-20 (Version 1)

The use of pretrained models from general computer vision tasks is widespread in remote sensing, significantly reducing training costs and improving performance. However, this practice also introduces vulnerabilities: publicly available pretrained models can be used as a proxy to compromise the downstream models built on them. This paper presents a novel Adversarial Neuron Manipulation method, which generates transferable perturbations by selectively manipulating single or multiple neurons in pretrained models. Unlike existing attacks, this method eliminates the need for domain-specific information, making it more broadly applicable and efficient. By targeting multiple fragile neurons, the perturbations achieve superior attack performance, revealing critical vulnerabilities in deep learning models. Experiments on diverse models and remote sensing datasets validate the effectiveness of the proposed method. This low-access adversarial neuron manipulation technique highlights a significant security risk in transfer learning models, emphasizing the urgent need for more robust defenses when such models are deployed in safety-critical remote sensing tasks.
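The core idea of the abstract, crafting a perturbation that drives the activations of chosen "fragile" neurons in a pretrained model, using only access to that model, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' code: the encoder is a stand-in single linear layer with ReLU, the function names are invented, and a real attack would use autograd on a deep network rather than the closed-form gradient of a linear layer.

```python
# Hypothetical sketch of neuron-manipulation-style attacks (all names and
# details here are assumptions for illustration, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pretrained encoder": one linear layer (16 neurons, 64-dim input)
# followed by ReLU. A real attack would target layers of a deep network.
W = rng.normal(size=(16, 64))

def activations(x):
    return np.maximum(W @ x, 0.0)

def manipulate_neurons(x, neurons, eps=0.1, steps=20):
    """PGD-style sign-gradient ascent on the summed pre-activation of the
    chosen neurons, under an L_inf perturbation budget eps. No downstream
    labels or domain data are used, only the pretrained weights."""
    delta = np.zeros_like(x)
    step = 2.0 * eps / steps
    for _ in range(steps):
        # Gradient of the summed pre-activation w.r.t. the input; for a
        # linear layer this is just the sum of the selected weight rows.
        grad = W[neurons].sum(axis=0)
        delta = np.clip(delta + step * np.sign(grad), -eps, eps)
    return x + delta

x = rng.normal(size=64)
x_adv = manipulate_neurons(x, neurons=[3, 7])
```

Because the perturbation is built from the pretrained weights alone, any downstream model that reuses this encoder inherits the distorted features, which is the transfer-learning risk the abstract describes.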

Comments: This work has been submitted to the IEEE for possible publication
Categories: cs.CV, eess.IV
Related articles:
arXiv:2401.08837 [cs.CV] (Published 2024-01-16)
Image Fusion in Remote Sensing: An Overview and Meta Analysis
arXiv:2006.05180 [cs.CV] (Published 2020-06-09)
Breaking the Limits of Remote Sensing by Simulation and Deep Learning for Flood and Debris Flow Mapping
arXiv:2311.18082 [cs.CV] (Published 2023-11-29)
Zooming Out on Zooming In: Advancing Super-Resolution for Remote Sensing