arXiv:2102.06307 [cs.LG]

What does LIME really see in images?

Damien Garreau, Dina Mardaoui

Published 2021-02-11 (Version 1)

The performance of modern algorithms on certain computer vision tasks such as object recognition is now close to that of humans. This success was achieved at the price of complicated architectures depending on millions of parameters, and it has become quite challenging to understand how particular predictions are made. Interpretability methods propose to give us this understanding. In this paper, we study LIME, perhaps one of the most popular such methods. On the theoretical side, we show that when the number of generated examples is large, LIME explanations concentrate around a limit explanation for which we give an explicit expression. We further this study for elementary shape detectors and linear models. As a consequence of this analysis, we uncover a connection between LIME and integrated gradients, another explanation method. More precisely, the LIME explanations are similar to the sum of integrated gradients over the superpixels used in the preprocessing step of LIME.
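
To make the stated connection concrete, below is a minimal sketch (not the authors' code) of the quantity the abstract refers to: per-pixel integrated gradients summed over each superpixel, which the paper's limit LIME coefficients are claimed to track. The linear model f(x) = w.x, the tiny flattened 4x4 "image", the mean-replacement baseline, and all variable names are illustrative assumptions made here for the example only.

# Sketch: sum of integrated gradients per superpixel, under toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 16                                 # a tiny 4x4 image, flattened
w = rng.normal(size=n_pixels)                 # toy linear model f(x) = w . x
f = lambda x: x @ w
grad_f = lambda x: w                          # gradient of a linear model is constant

image = rng.uniform(size=n_pixels)
superpixels = np.repeat(np.arange(4), 4)      # 4 superpixels of 4 pixels each
# Baseline assumed here: each superpixel replaced by its mean value,
# mimicking a grayscale-style replacement used in LIME's preprocessing.
baseline = np.array([image[superpixels == superpixels[i]].mean()
                     for i in range(n_pixels)])

def integrated_gradients(x, x_base, grad, steps=100):
    """Riemann-sum approximation of per-pixel integrated gradients."""
    total = np.zeros_like(x)
    for a in np.linspace(0.0, 1.0, steps):
        total += grad(x_base + a * (x - x_base))
    return (x - x_base) * total / steps

ig = integrated_gradients(image, baseline, grad_f)
ig_per_superpixel = np.array([ig[superpixels == j].sum() for j in range(4)])
print(ig_per_superpixel)                      # one value per superpixel
# Sanity check (completeness of integrated gradients):
print(f(image) - f(baseline), "~=", ig.sum())

The abstract's claim is that, in the large-sample limit, LIME's coefficient for a superpixel behaves like the corresponding entry of ig_per_superpixel; the sketch only computes the integrated-gradients side of that comparison.
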

Related articles:
arXiv:2310.04821 [cs.LG] (Published 2023-10-07, updated 2023-10-10)
Rethink Baseline of Integrated Gradients from the Perspective of Shapley Value
Shuyang Liu et al.
arXiv:2103.13533 [cs.LG] (Published 2021-03-25)
Symmetry-Preserving Paths in Integrated Gradients
arXiv:2205.13152 [cs.LG] (Published 2022-05-26)
Transferable Adversarial Attack based on Integrated Gradients