arXiv:1805.07984 [stat.ML]

Adversarial Attacks on Classification Models for Graphs

Daniel Zügner, Amir Akbarnejad, Stephan Günnemann

Published 2018-05-21 (Version 1)

Deep learning models for graphs have achieved strong performance on the task of node classification. Despite their proliferation, there is currently no study of their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g. the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this work, we introduce the first study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions. We generate adversarial perturbations targeting the node's features and the graph structure, thus taking the dependencies between instances into account. To cope with the underlying discrete domain, we propose Nettack, an efficient algorithm exploiting incremental computations. Our experimental study shows that the accuracy of node classification drops significantly even after only a few perturbations. Moreover, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models.
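The abstract does not spell out the algorithm, but the core idea it describes, greedily perturbing the discrete graph structure to hurt a target node's classification, can be illustrated with a small sketch. Below is a minimal, hypothetical Python example of a greedy structure-perturbation attack against a linearized two-layer GCN surrogate (logits = Â²XW); the function names, the surrogate form, and the restriction to edges incident to the target node are illustrative assumptions, not the authors' actual Nettack implementation.

```python
import numpy as np

def normalized_adj(A):
    """Symmetrically normalize A with self-loops: A_hat = D^-1/2 (A + I) D^-1/2."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    return d_inv_sqrt @ A_tilde @ d_inv_sqrt

def margin(A, X, W, target, true_label):
    """Classification margin of the target node under a linearized
    two-layer GCN surrogate: logits = A_hat^2 X W (nonlinearity dropped)."""
    A_hat = normalized_adj(A)
    z = (A_hat @ A_hat @ X @ W)[target]
    return z[true_label] - np.max(np.delete(z, true_label))  # < 0 => misclassified

def greedy_structure_attack(A, X, W, target, true_label, budget):
    """Hypothetical sketch: within a flip budget, greedily toggle the edge
    incident to the target that most reduces its margin. The paper's actual
    algorithm uses incremental computations and also perturbs features."""
    A = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        best_v, best_m = None, margin(A, X, W, target, true_label)
        for v in range(n):
            if v == target:
                continue
            A[target, v] = A[v, target] = 1 - A[target, v]  # try flipping edge
            m = margin(A, X, W, target, true_label)
            if m < best_m:
                best_v, best_m = v, m
            A[target, v] = A[v, target] = 1 - A[target, v]  # undo
        if best_v is None:
            break  # no single flip lowers the margin further
        A[target, best_v] = A[best_v, target] = 1 - A[target, best_v]
    return A

if __name__ == "__main__":
    # Toy usage on a random graph with made-up features and weights.
    rng = np.random.default_rng(0)
    n, d, c = 20, 8, 3
    A = np.triu((rng.random((n, n)) < 0.15).astype(float), 1)
    A = A + A.T
    X = rng.random((n, d))
    W = rng.standard_normal((d, c))
    A_pert = greedy_structure_attack(A, X, W, target=0, true_label=1, budget=3)
```

For clarity this sketch recomputes the full surrogate margin after every candidate flip, which is O(n) forward passes per perturbation; the incremental computations mentioned in the abstract exist precisely to avoid that cost.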

Related articles:
arXiv:1906.00230 [stat.ML] (Published 2019-06-01)
Disentangling Improves VAEs' Robustness to Adversarial Attacks
arXiv:2204.06274 [stat.ML] (Published 2022-04-13)
Overparameterized Linear Regression under Adversarial Attacks
arXiv:1809.09262 [stat.ML] (Published 2018-09-25)
Neural Networks with Structural Resistance to Adversarial Attacks