arXiv:1611.07429 [stat.ML]

TreeView: Peeking into Deep Neural Networks Via Feature-Space Partitioning

Jayaraman J. Thiagarajan, Bhavya Kailkhura, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy

Published 2016-11-22, Version 1

With the advent of highly predictive but opaque deep learning models, it has become more important than ever to understand and explain their predictions. Existing approaches define interpretability as the inverse of complexity and achieve interpretability at the cost of accuracy, which introduces the risk of producing interpretable but misleading explanations. As humans, we are prone to engage in this kind of behavior [mythos]. In this paper, we take a step toward tackling interpretability without compromising model accuracy. We propose to build a TreeView representation of the complex model via hierarchical partitioning of the feature space, which reveals the iterative rejection of unlikely class labels until the correct association is predicted.
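To make the idea of "iterative rejection of unlikely class labels via hierarchical partitioning" concrete, here is a minimal, hypothetical sketch. The function name `partition_path`, the median-split rule, and the feature-cycling scheme are all illustrative assumptions, not the paper's actual TreeView construction; the sketch only shows how successive partitions of a labeled feature space can shrink the set of plausible labels for a query point.

```python
# Hypothetical sketch (not the paper's method): recursively partition a
# labeled feature space and record, at each level, which class labels still
# occur in the partition containing the query point. Splitting rule
# (median threshold on one feature per level) is an illustrative assumption.

def partition_path(points, labels, query, max_depth=3):
    """Return the list of surviving label sets, one per hierarchy level."""
    surviving = [sorted(set(labels))]          # level 0: all labels plausible
    depth = 0
    while depth < max_depth and len(set(labels)) > 1:
        dim = depth % len(query)               # cycle through feature dims
        vals = sorted(p[dim] for p in points)
        thresh = vals[(len(vals) - 1) // 2]    # median split on this feature
        # keep only points on the same side of the split as the query
        keep = [i for i, p in enumerate(points)
                if (p[dim] <= thresh) == (query[dim] <= thresh)]
        if not keep:
            break
        points = [points[i] for i in keep]
        labels = [labels[i] for i in keep]
        surviving.append(sorted(set(labels)))  # labels rejected so far drop out
        depth += 1
    return surviving
```

For example, with four points labeled 'a', 'a', 'b', 'c' and a query at (1, 1), the surviving label sets shrink level by level from {'a', 'b', 'c'} down to {'c'}, mirroring the iterative-rejection view the abstract describes.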

Comments: Presented at NIPS 2016 Workshop on Interpretable Machine Learning in Complex Systems
Categories: stat.ML, cs.LG
Related articles:
arXiv:1805.08266 [stat.ML] (Published 2018-05-21)
On the Selection of Initialization and Activation Function for Deep Neural Networks
arXiv:1402.1869 [stat.ML] (Published 2014-02-08, updated 2014-06-07)
On the Number of Linear Regions of Deep Neural Networks
arXiv:1712.09482 [stat.ML] (Published 2017-12-27)
Robust Loss Functions under Label Noise for Deep Neural Networks