arXiv:1811.11705 [cs.LG]

An Adversarial Approach for Explainable AI in Intrusion Detection Systems

Daniel L. Marino, Chathurika S. Wickramasinghe, Milos Manic

Published 2018-11-28 (Version 1)

Despite the growing popularity of modern machine learning techniques (e.g., deep neural networks) in cyber-security applications, most of these models are perceived as a black box by the user. Adversarial machine learning offers an approach to increase our understanding of these models. In this paper we present an approach for generating explanations of incorrect classifications made by data-driven Intrusion Detection Systems (IDSs). An adversarial approach is used to find the minimum modifications of the input features required to correctly classify a given set of misclassified samples. The magnitude of these modifications is used to visualize the features that are most relevant to explaining the misclassification. The presented methodology generated satisfactory explanations that describe the reasoning behind the misclassifications, with descriptions that match expert knowledge. The advantages of the presented methodology are that it: 1) is applicable to any classifier with defined gradients; 2) does not require any modification of the classifier model; and 3) can be extended to perform further diagnosis (e.g., vulnerability assessment) and to gain further understanding of the system. Experimental evaluation was conducted on the NSL-KDD99 benchmark dataset using linear and multilayer perceptron classifiers. The results are shown using intuitive visualizations to improve their interpretability.
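The core idea can be read as a targeted adversarial optimization: for a misclassified sample x with true label y, find the smallest perturbation delta such that the classifier assigns x + delta to y. Below is a minimal sketch of one way to implement this idea for a differentiable classifier in PyTorch; the function name, the Adam optimizer, the squared-L2 penalty, and the weight c are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def minimal_correction(model, x, y_true, steps=200, lr=0.05, c=0.1):
    """Sketch: find a small perturbation delta so that model(x + delta)
    is classified as y_true. Hypothetical helper; the penalty weight c
    and optimizer settings are assumptions, not the paper's exact setup."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x + delta)
        # Push the prediction toward the correct class while keeping
        # the modification of the input features as small as possible.
        loss = F.cross_entropy(logits, y_true) + c * delta.pow(2).sum()
        loss.backward()
        opt.step()
    return delta.detach()
```

The per-feature magnitude of the returned delta then serves as the explanation: features requiring the largest change to correct the prediction are the most relevant to the misclassification, and can be ranked or visualized (e.g., as a bar chart over the input features).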

Related articles:
arXiv:1909.11835 [cs.LG] (Published 2019-09-26)
GAMIN: An Adversarial Approach to Black-Box Model Inversion
arXiv:2006.11194 [cs.LG] (Published 2020-06-19)
Does Explainable Artificial Intelligence Improve Human Decision-Making?
arXiv:2005.00130 [cs.LG] (Published 2020-04-30)
Hide-and-Seek: A Template for Explainable AI