{ "id": "2108.01807", "version": "v1", "published": "2021-08-04T01:57:00.000Z", "updated": "2021-08-04T01:57:00.000Z", "title": "On the Robustness of Domain Adaption to Adversarial Attacks", "authors": [ "Liyuan Zhang", "Yuhang Zhou", "Lei Zhang" ], "comment": "10 pages, 4 figures", "categories": [ "cs.CV" ], "abstract": "State-of-the-art deep neural networks (DNNs) have been proved to have excellent performance on unsupervised domain adaption (UDA). However, recent work shows that DNNs perform poorly when being attacked by adversarial samples, where these attacks are implemented by simply adding small disturbances to the original images. Although plenty of work has focused on this, as far as we know, there is no systematic research on the robustness of unsupervised domain adaption model. Hence, we discuss the robustness of unsupervised domain adaption against adversarial attacking for the first time. We benchmark various settings of adversarial attack and defense in domain adaption, and propose a cross domain attack method based on pseudo label. Most importantly, we analyze the impact of different datasets, models, attack methods and defense methods. Directly, our work proves the limited robustness of unsupervised domain adaptation model, and we hope our work may facilitate the community to pay more attention to improve the robustness of the model against attacking.", "revisions": [ { "version": "v1", "updated": "2021-08-04T01:57:00.000Z" } ], "analyses": { "keywords": [ "adversarial attack", "robustness", "state-of-the-art deep neural networks", "cross domain attack method", "unsupervised domain adaptation model" ], "note": { "typesetting": "TeX", "pages": 10, "language": "en", "license": "arXiv", "status": "editable" } } }