arXiv Analytics

arXiv:2001.00396 [stat.ML]

Restricting the Flow: Information Bottlenecks for Attribution

Karl Schulz, Leon Sixt, Federico Tombari, Tim Landgraf

Published 2020-01-02 (Version 1)

Attribution methods provide insights into the decision-making of machine learning models like artificial neural networks. For a given input sample, they assign a relevance score to each individual input variable, such as the pixels of an image. In this work, we adapt the information bottleneck concept for attribution. By adding noise to intermediate feature maps, we restrict the flow of information and can quantify (in bits) how much information image regions provide. We compare our method against ten baselines using three different metrics on VGG-16 and ResNet-50, and find that our methods outperform all baselines in five out of six settings. The method's information-theoretic foundation provides an absolute frame of reference for attribution values (bits) and a guarantee that regions scored close to zero are not necessary for the network's decision.
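
The core mechanism described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical implementation of such a noise bottleneck and is not the authors' reference code: a learnable mask blends an intermediate feature map with Gaussian noise matched to the feature statistics, and a KL-divergence term, converted from nats to bits, estimates how much information each region still transmits. The function name `bottleneck`, the mask parameter `alpha`, and the closed-form KL approximation are illustrative assumptions.

```python
# Minimal sketch of a per-sample noise bottleneck for attribution
# (illustrative assumption, not the authors' reference implementation).
import torch

def bottleneck(R, alpha, eps=1e-6):
    """Blend feature map R with Gaussian noise controlled by sigmoid(alpha).

    Returns the restricted feature map Z and a per-element estimate,
    in bits, of the information that still flows through.
    """
    lam = torch.sigmoid(alpha)              # mask in [0, 1], same shape as R
    mu, std = R.mean(), R.std() + eps       # noise matched to feature statistics
    noise = mu + std * torch.randn_like(R)
    Z = lam * R + (1.0 - lam) * noise       # noisy, information-restricted features

    # Conditional on R, Z is Gaussian; its KL divergence from the pure noise
    # distribution N(mu, std^2) bounds the information transmitted about R.
    mu_z = lam * R + (1.0 - lam) * mu
    var_z = ((1.0 - lam) * std) ** 2 + eps
    kl_nats = 0.5 * (var_z / std**2 + (mu_z - mu) ** 2 / std**2
                     - 1.0 - torch.log(var_z / std**2))
    bits = kl_nats / torch.log(torch.tensor(2.0))  # convert nats to bits
    return Z, bits
```

In use, one would freeze the pretrained classifier, insert this layer after an intermediate block of, for example, VGG-16, and optimize only `alpha` to minimize the classification loss plus a weighted sum of `bits`; summing `bits` over channels then yields a spatial attribution map in which values near zero indicate regions the network does not need.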

Related articles:
arXiv:1805.01930 [stat.ML] (Published 2018-05-04)
Enhancing the Regularization Effect of Weight Pruning in Artificial Neural Networks
arXiv:2002.11152 [stat.ML] (Published 2020-02-25)
Fundamental Issues Regarding Uncertainties in Artificial Neural Networks
arXiv:2008.03920 [stat.ML] (Published 2020-08-10)
Do ideas have shape? Plato's theory of forms as the continuous limit of artificial neural networks