arXiv:1806.08049 [cs.LG]

On the Robustness of Interpretability Methods

David Alvarez-Melis, Tommi S. Jaakkola

Published: 2018-06-21 (Version 1)

We argue that robustness of explanations, i.e., that similar inputs should give rise to similar explanations, is a key desideratum for interpretability. We introduce metrics to quantify robustness and demonstrate that current methods do not perform well according to these metrics. Finally, we propose ways that robustness can be enforced on existing interpretability approaches.
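The robustness notion above can be made concrete as local Lipschitz continuity of the explanation map: a method explain(x) is robust at x if ||explain(x') - explain(x)|| / ||x' - x|| stays small for all x' in a small ball around x. The sketch below estimates this quantity by random sampling; it is illustrative only, not the authors' implementation, and the names local_lipschitz_estimate and explain are hypothetical placeholders for any attribution method.

import numpy as np

def local_lipschitz_estimate(explain, x, eps=0.1, n_samples=100, seed=0):
    """Sampling-based estimate of the local Lipschitz constant of an
    explanation map at x:

        L(x) ~ max_{x' in B(x, eps)} ||explain(x') - explain(x)|| / ||x' - x||

    Large values mean nearby inputs receive very different explanations,
    i.e., the interpretability method is not robust at x.
    """
    rng = np.random.default_rng(seed)
    e_x = explain(x)
    worst = 0.0
    for _ in range(n_samples):
        # Perturb x inside an eps-box (a simple stand-in for an eps-ball).
        x_pert = x + rng.uniform(-eps, eps, size=x.shape)
        denom = np.linalg.norm(x_pert - x)
        if denom > 0.0:
            worst = max(worst, np.linalg.norm(explain(x_pert) - e_x) / denom)
    return worst

# Toy usage: the "explanation" is the input gradient of a fixed quadratic
# model f(x) = 0.5 * x^T A x, whose gradient is A x.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
x0 = np.array([1.0, -1.0])
print(local_lipschitz_estimate(lambda x: A @ x, x0, eps=0.1))

Uniform sampling in the eps-box is a deliberately simple choice; a more careful search of the neighborhood (e.g., an optimizer over the eps-ball) would yield a tighter estimate.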

Comments: presented at 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden
Categories: cs.LG, stat.ML
Related articles:
arXiv:2311.07073 [cs.LG] (Published 2023-11-13)
Exposition on over-squashing problem on GNNs: Current Methods, Benchmarks and Challenges
arXiv:2207.07769 [cs.LG] (Published 2022-07-15)
Anomalous behaviour in loss-gradient based interpretability methods
arXiv:2410.17980 [cs.LG] (Published 2024-10-23)
Stick-breaking Attention