arXiv Analytics


arXiv:1909.07082 [cs.LG]

Towards A Rigorous Evaluation Of XAI Methods On Time Series

Udo Schlegel, Hiba Arnout, Mennatallah El-Assady, Daniela Oelke, Daniel A. Keim

Published 2019-09-16 (Version 1)

Explainable Artificial Intelligence (XAI) methods are typically deployed to explain and debug black-box machine learning models. However, most proposed XAI methods are black-boxes themselves and designed for images; thus, they rely on visual interpretability to evaluate and prove explanations. In this work, we apply XAI methods previously used in the image and text domains to time series. We present a methodology to test and evaluate various XAI methods on time series by introducing new verification techniques that incorporate the temporal dimension. We further conduct preliminary experiments to assess the quality of explanations from selected XAI methods, applying various verification methods to a range of datasets and inspecting quality metrics on the results. Our initial experiments demonstrate that SHAP works robustly across all models, while others such as DeepLIFT, LRP, and Saliency Maps work better with specific architectures.
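The perturbation-style verification the abstract describes can be sketched roughly as follows. Everything here is an illustrative assumption, not the paper's actual setup: the toy `model`, the occlusion-based `relevance` proxy (standing in for SHAP/LRP/Saliency attributions), and the `verify` check are hypothetical names. The idea is that perturbing the time points an explanation marks as most relevant should degrade the model's output more than perturbing random time points.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Toy stand-in for a trained black-box: the score is the mean of
    the (informative) middle segment of the series."""
    return float(x[40:60].mean())

def relevance(x):
    """Occlusion relevance: zero each time step and record the change
    in the model score (a crude proxy for SHAP/LRP attributions)."""
    base = model(x)
    r = np.zeros_like(x)
    for t in range(len(x)):
        x_pert = x.copy()
        x_pert[t] = 0.0
        r[t] = abs(base - model(x_pert))
    return r

def verify(x, k=10):
    """Perturbation check over the temporal dimension: zeroing the k
    most relevant time points should hurt the score more than zeroing
    k randomly chosen ones."""
    r = relevance(x)
    top_k = np.argsort(r)[-k:]
    rand_k = rng.choice(len(x), size=k, replace=False)
    x_top, x_rand = x.copy(), x.copy()
    x_top[top_k] = 0.0
    x_rand[rand_k] = 0.0
    return model(x) - model(x_top), model(x) - model(x_rand)

x = np.zeros(100)
x[40:60] = 1.0  # informative middle segment
drop_relevant, drop_random = verify(x)
assert drop_relevant >= drop_random  # a faithful explanation passes this check
```

A real evaluation would replace the toy model with a trained time-series classifier and the occlusion scores with attributions from SHAP, DeepLIFT, LRP, or Saliency Maps, then compare the score drops as a quality metric.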

Comments: 5 pages, 1 figure, 1 table, 1 page of references; 2019 ICCV Workshop on Interpreting and Explaining Visual Artificial Intelligence Models
Journal: 2019 ICCV Workshop on Interpreting and Explaining Visual Artificial Intelligence Models
Categories: cs.LG, cs.AI
Related articles:
arXiv:2105.08179 [cs.LG] (Published 2021-05-17, updated 2021-05-21)
Learning Disentangled Representations for Time Series
arXiv:2308.01578 [cs.LG] (Published 2023-08-03)
Unsupervised Representation Learning for Time Series: A Review
arXiv:2202.03944 [cs.LG] (Published 2022-02-08)
Detecting Anomalies within Time Series using Local Neural Transformations