arXiv Analytics

arXiv:1910.07387 [cs.LG]

Explaining with Impact: A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms

Zhong Qiu Lin, Mohammad Javad Shafiee, Stanislav Bochkarev, Michael St. Jules, Xiao Yu Wang, Alexander Wong

Published 2019-10-16 (Version 1)

There has been a significant surge of interest recently around the concept of explainable artificial intelligence (XAI), where the goal is to produce an interpretation for a decision made by a machine learning algorithm. Of particular interest is the interpretation of how deep neural networks make decisions, given the complexity and 'black box' nature of such networks. Given the infancy of the field, there has been very limited exploration into assessing the performance of explainability methods, with most evaluations centered around subjective visual inspection of the produced interpretations. In this study, we explore a more machine-centric strategy for quantifying the performance of explainability methods on deep neural networks via the notion of decision-making impact analysis. We introduce two quantitative performance metrics: i) Impact Score, which assesses the percentage of critical factors with either strong confidence reduction impact or decision changing impact, and ii) Impact Coverage, which assesses the percentage coverage of adversarially impacted factors in the input. A comprehensive analysis using this approach was conducted on several state-of-the-art explainability methods (LIME, SHAP, Expected Gradients, GSInquire) applied to a ResNet-50 deep convolutional neural network on a subset of ImageNet for the task of image classification. Experimental results show that the critical regions identified by LIME within the tested images had the lowest impact on the decision-making process of the network (~38%), with a progressive increase in decision-making impact for SHAP (~44%), Expected Gradients (~51%), and GSInquire (~76%). While by no means perfect, the hope is that the proposed machine-centric strategy helps push the conversation forward towards better metrics for evaluating explainability methods and improve trust in deep neural networks.
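
To make the two metrics more concrete, the sketch below shows one possible per-image formulation in Python/NumPy. The `model` callable, the boolean-mask representation of critical and adversarial regions, the zero-fill ablation, and the 50% confidence-drop threshold are illustrative assumptions rather than the paper's exact definitions; dataset-level percentages would be obtained by averaging these per-image values over the test set.

```python
import numpy as np


def impact_score(model, x, critical_mask, baseline=0.0, confidence_drop=0.5):
    """Per-image sketch of an Impact Score style metric.

    model           : callable returning class probabilities for a batch (hypothetical API)
    x               : input image, shape (H, W, C)
    critical_mask   : boolean mask (H, W) of regions the explainer marks as critical
    baseline        : fill value used to ablate critical regions (assumption)
    confidence_drop : fraction of the original confidence that must be lost to count
                      as a "strong confidence reduction" (illustrative threshold)
    """
    probs = model(x[None])[0]
    orig_class = int(np.argmax(probs))
    orig_conf = float(probs[orig_class])

    # Ablate the critical factors identified by the explainability method.
    x_ablated = x.copy()
    x_ablated[critical_mask] = baseline
    probs_ablated = model(x_ablated[None])[0]
    new_class = int(np.argmax(probs_ablated))
    new_conf = float(probs_ablated[orig_class])

    # A critical region "impacts" the decision if removing it flips the decision
    # or strongly reduces the network's confidence in the original decision.
    decision_changed = new_class != orig_class
    strong_reduction = new_conf <= (1.0 - confidence_drop) * orig_conf
    return 1.0 if (decision_changed or strong_reduction) else 0.0


def impact_coverage(critical_mask, adversarial_mask):
    """Per-image sketch of an Impact Coverage style metric: the fraction of the
    adversarially perturbed region covered by the explainer's critical regions."""
    overlap = np.logical_and(critical_mask, adversarial_mask).sum()
    return overlap / max(adversarial_mask.sum(), 1)
```

In practice, the critical mask would come from the explainability method under test (for example, the regions LIME or GSInquire marks as important), and the adversarial mask from the footprint of an adversarial perturbation applied to the input.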

Related articles:
arXiv:1411.1792 [cs.LG] (Published 2014-11-06)
How transferable are features in deep neural networks?
arXiv:1903.08297 [cs.LG] (Published 2019-03-20)
Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening
Nan Wu et al.
arXiv:1708.01666 [cs.LG] (Published 2017-07-31)
An Effective Training Method For Deep Convolutional Neural Network