arXiv Analytics

arXiv:2004.10824 [cs.LG]

Assessing the Reliability of Visual Explanations of Deep Models with Adversarial Perturbations

Dan Valle, Tiago Pimentel, Adriano Veloso

Published 2020-04-22 · Version 1

Interest in complex deep neural networks for computer vision applications is increasing, which in turn drives the need to improve the interpretability of these models. Recent explanation methods visualize the relevance of individual pixels in the input image, enabling direct interpretation of the input properties that lead to a specific output. These methods produce maps of pixel importance, which are commonly evaluated by visual inspection. This means the effectiveness of an explanation method is assessed against human expectation rather than actual feature importance. In this work, we therefore propose an objective measure to evaluate the reliability of explanations of deep models. Specifically, our approach is based on changes in the network's output caused by perturbing the input image in an adversarial way. We compare widely-known explanation methods using the proposed approach. Finally, we present a straightforward application of our approach to clean relevance maps, producing more interpretable maps without any loss of essential explanation (as measured by our proposed metric).
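To make the evaluation idea concrete, here is a minimal sketch of a perturbation-based reliability check. It is not the paper's method: instead of an adversarial perturbation it uses a simpler occlusion-style variant (zeroing the pixels the map ranks as most relevant), and `output_change`, the toy linear `model`, and `true_rel` are all hypothetical names introduced for illustration.

```python
import numpy as np

def output_change(model, image, relevance, k):
    """Change in the model's scalar output after zeroing the k pixels the
    relevance map ranks highest. This is an occlusion-style simplification
    of the adversarial perturbation described in the abstract."""
    idx = np.argsort(relevance.ravel())[::-1][:k]   # k most relevant pixels
    perturbed = image.ravel().copy()
    perturbed[idx] = 0.0                            # occlude those pixels
    return abs(model(image) - model(perturbed.reshape(image.shape)))

# Toy stand-in for a deep network: a fixed positive linear score, for which
# the exact relevance of each pixel is known in closed form (w_i * x_i).
rng = np.random.default_rng(0)
w = rng.uniform(0.1, 1.0, size=16)
model = lambda x: float(w @ x.ravel())

x = rng.uniform(0.1, 1.0, size=(4, 4))
true_rel = (w * x.ravel()).reshape(4, 4)   # a faithful relevance map
```

Under this measure, a faithful map should yield a larger output change than an unfaithful one: perturbing the pixels a faithful map flags removes large contributions to the score, while perturbing according to an inverted (worst-case) map barely moves the output.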

Comments: Accepted for publication at IJCNN 2020
Categories: cs.LG, stat.ML
Related articles:
arXiv:1911.02048 [cs.LG] (Published 2019-11-05)
Guided Layer-wise Learning for Deep Models using Side Information
arXiv:1903.09033 [cs.LG] (Published 2019-03-21)
Deep Models for Relational Databases
arXiv:2004.04919 [cs.LG] (Published 2020-04-10)
Luring of Adversarial Perturbations