arXiv:1910.07856 [cs.LG]
Effect of Superpixel Aggregation on Explanations in LIME -- A Case Study with Biological Data
Ludwig Schallner, Johannes Rabold, Oliver Scholz, Ute Schmid
Published 2019-10-17 (Version 1)
End-to-end learning with deep neural networks, such as convolutional neural networks (CNNs), has proven highly successful across a wide range of image classification tasks. To make the decisions of such black-box approaches transparent, various explanation methods have been proposed. LIME is an explainable-AI approach that segments an image into superpixels, by default using the Quick-Shift algorithm, and attributes relevance to these segments. In this paper, we present an explorative study of how different superpixel methods, namely Felzenszwalb, SLIC and Compact-Watershed, affect the generated visual explanations. We compare the resulting relevance areas with the image regions marked as relevant by a human reference. The results show that the image parts selected as relevant vary strongly with the applied segmentation method: Quick-Shift yielded the lowest and Compact-Watershed the highest correspondence with the reference relevance areas.
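The comparison hinges on swapping the segmentation step that LIME's image explainer uses to form superpixels. The sketch below shows, under stated assumptions, how this can be done with the lime package and scikit-image; the parameter values and the classifier_fn argument (e.g. a model's predict_proba) are illustrative placeholders, not the settings used in the paper.

# Minimal sketch (not the authors' code): supplying different superpixel
# methods to LIME's image explainer via its segmentation_fn argument.
# Requires the `lime` and `scikit-image` packages; parameter values below
# are illustrative defaults, not the configuration from the paper.
from lime import lime_image
from skimage.segmentation import felzenszwalb, slic, quickshift, watershed
from skimage.filters import sobel
from skimage.color import rgb2gray

# Each entry maps an RGB image to an integer label mask, i.e. the superpixel
# segmentation that LIME perturbs when sampling local explanations.
segmenters = {
    "quickshift": lambda img: quickshift(img, kernel_size=4, max_dist=200, ratio=0.2),
    "felzenszwalb": lambda img: felzenszwalb(img, scale=100, sigma=0.5, min_size=50),
    "slic": lambda img: slic(img, n_segments=100, compactness=10),
    # Compact-Watershed: watershed on the gradient image with a compactness term.
    "compact_watershed": lambda img: watershed(
        sobel(rgb2gray(img)), markers=250, compactness=0.001
    ),
}

def explain_with(image, classifier_fn, method="quickshift"):
    """Explain `image` with LIME using the chosen superpixel method.

    `image` is an HxWx3 array; `classifier_fn` maps a batch of images to
    class probabilities (e.g. model.predict_proba, assumed here).
    """
    explainer = lime_image.LimeImageExplainer()
    return explainer.explain_instance(
        image,
        classifier_fn,
        top_labels=1,
        num_samples=1000,
        segmentation_fn=segmenters[method],
    )

The returned explanation object can then be visualized per method (e.g. via its get_image_and_mask method) to compare which superpixels each segmentation marks as relevant.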