arXiv Analytics

arXiv:1802.03788 [cs.LG]

Influence-Directed Explanations for Deep Convolutional Networks

Klas Leino, Linyi Li, Shayak Sen, Anupam Datta, Matt Fredrikson

Published 2018-02-11 (Version 1)

We study the problem of explaining a rich class of behavioral properties of deep neural networks. Distinctively, our influence-directed explanations approach this problem by peering inside the network to identify neurons with high influence on the property and distribution of interest using an axiomatically justified influence measure, and then providing an interpretation for the concepts these neurons represent. We evaluate our approach by training convolutional neural networks on MNIST, ImageNet, Pubfig, and Diabetic Retinopathy datasets. Our evaluation demonstrates that influence-directed explanations (1) identify influential concepts that generalize across instances, (2) help extract the essence of what the network learned about a class, (3) isolate individual features the network uses to make decisions and distinguish related instances, and (4) assist in understanding misclassifications.
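The abstract describes the approach only at a high level. As a rough illustration of the first step (scoring internal neurons by their influence on a quantity of interest over a distribution of interest), the sketch below averages the gradient of a class score with respect to a hidden layer's activations over a batch of inputs. This is a minimal sketch assuming a PyTorch convolutional model; the function name, layer choice, and averaging scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


def internal_influence(model: nn.Module, layer: nn.Module,
                       inputs: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Average gradient of the class-`class_idx` score with respect to the
    activations of `layer`, taken over `inputs` (a sample from the
    distribution of interest). Channels with large average influence are
    candidates for interpretation. Assumes `layer` produces a 4-D
    (batch, channels, height, width) activation tensor."""
    acts = []

    def hook(_module, _inp, out):
        out.retain_grad()          # keep gradients for this non-leaf tensor
        acts.append(out)

    handle = layer.register_forward_hook(hook)
    try:
        # Sum of the chosen class's scores over the batch; its gradient
        # w.r.t. the hidden activations is the per-example gradient summed.
        scores = model(inputs)[:, class_idx].sum()
        scores.backward()
    finally:
        handle.remove()

    grads = acts[0].grad                      # (batch, channels, h, w)
    # Average over the batch and spatial positions: one score per channel.
    return grads.mean(dim=(0, 2, 3))
```

A hypothetical use would pass a pretrained classifier, one of its convolutional layers, and a batch of images from the class of interest, then take the channels with the largest absolute influence scores for the interpretation step, e.g. by visualizing inputs or patches that maximally activate those channels.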

Related articles:
arXiv:1605.09593 [cs.LG] (Published 2016-05-31)
Controlling Exploration Improves Training for Deep Neural Networks
arXiv:1706.05098 [cs.LG] (Published 2017-06-15)
An Overview of Multi-Task Learning in Deep Neural Networks
arXiv:1708.01911 [cs.LG] (Published 2017-08-06)
Training of Deep Neural Networks based on Distance Measures using RMSProp