arXiv Analytics

arXiv:2010.11415 [cs.LG]

Maximum Mean Discrepancy is Aware of Adversarial Attacks

Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama

Published 2020-10-22 (Version 1)

The maximum mean discrepancy (MMD) test, as a representative two-sample test, can in principle detect any distributional discrepancy between two datasets. However, it has been shown that MMD is unaware of adversarial attacks: MMD failed to detect the discrepancy between natural data and adversarial data generated by adversarial attacks. Given this phenomenon, we raise a question: are natural and adversarial data really drawn from different distributions, and did previous uses of MMD for this purpose simply miss some key factors? The answer is affirmative. We find that previous uses missed three factors, and we accordingly propose three components: (a) the Gaussian kernel has limited representational power, so we replace it with a novel semantic-aware deep kernel; (b) the test power of MMD was neglected, so we maximize it in order to optimize our deep kernel; (c) adversarial data may be non-independent, so we apply the wild bootstrap to ensure the validity of the test. By taking care of these three factors, we verify that MMD is aware of adversarial attacks, which opens a novel avenue for adversarial attack detection based on two-sample tests.
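
To make the three components concrete, here is a minimal PyTorch sketch of a deep-kernel MMD statistic with a wild-bootstrap calibration. It is an illustration under assumptions, not the authors' released implementation: the featurizer, the bandwidths sigma_phi and sigma_q, the mixing weight eps, and the i.i.d. Rademacher multipliers (the paper's wild bootstrap targets possibly dependent data) are placeholders.

    import torch

    def pairwise_sq_dists(a, b):
        # Squared Euclidean distances between rows of a and rows of b.
        return torch.cdist(a, b, p=2) ** 2

    def deep_kernel(x, y, featurizer, sigma_phi=1.0, sigma_q=1.0, eps=0.1):
        # Illustrative deep kernel:
        #   k(x, y) = [(1 - eps) * kappa(phi(x), phi(y)) + eps] * q(x, y),
        # with Gaussian kernels kappa (on features) and q (on raw inputs).
        fx, fy = featurizer(x), featurizer(y)
        kappa = torch.exp(-pairwise_sq_dists(fx, fy) / (2 * sigma_phi ** 2))
        q = torch.exp(-pairwise_sq_dists(x.flatten(1), y.flatten(1))
                      / (2 * sigma_q ** 2))
        return ((1 - eps) * kappa + eps) * q

    def mmd2_unbiased(x, y, kernel):
        # Unbiased estimate of MMD^2 between samples x and y.
        m, n = x.shape[0], y.shape[0]
        kxx, kyy, kxy = kernel(x, x), kernel(y, y), kernel(x, y)
        sum_xx = (kxx.sum() - kxx.diagonal().sum()) / (m * (m - 1))
        sum_yy = (kyy.sum() - kyy.diagonal().sum()) / (n * (n - 1))
        return sum_xx + sum_yy - 2 * kxy.mean()

    def wild_bootstrap_pvalue(x, y, kernel, n_boot=200):
        # Wild-bootstrap p-value with i.i.d. Rademacher multipliers (a
        # simplification; variants of the wild bootstrap handle dependence).
        n = x.shape[0]  # assumes equal sample sizes
        h = kernel(x, x) + kernel(y, y) - kernel(x, y) - kernel(y, x)
        stat = h.mean()
        hits = 0
        for _ in range(n_boot):
            e = torch.randint(0, 2, (n, 1), dtype=torch.float32) * 2 - 1
            hits += int(((e @ e.T) * h).mean() >= stat)
        return (hits + 1) / (n_boot + 1)

    # Usage sketch: test a natural batch against a suspected adversarial batch.
    featurizer = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 64))
    kernel = lambda a, b: deep_kernel(a, b, featurizer)
    x_nat, x_adv = torch.randn(32, 1, 28, 28), torch.randn(32, 1, 28, 28)
    print(wild_bootstrap_pvalue(x_nat, x_adv, kernel))

In the actual method, the featurizer and kernel parameters would first be trained to maximize a test-power criterion (roughly, the ratio of the MMD estimate to its standard deviation under the alternative); that optimization loop is omitted from this sketch.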

Related articles:
arXiv:2103.05793 [cs.LG] (Published 2021-03-10)
Universal Approximation of Residual Flows in Maximum Mean Discrepancy
arXiv:2006.10255 [cs.LG] (Published 2020-06-18)
Calibrated Reliable Regression using Maximum Mean Discrepancy
arXiv:1906.00088 [cs.LG] (Published 2019-05-31)
Diversity-Inducing Policy Gradient: Using Maximum Mean Discrepancy to Find a Set of Diverse Policies