arXiv:2009.06349 [cs.LG]
Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations & Error Rates on Debugging a Deep Learning, Black-Box Classifier
Courtney Ford, Eoin M. Kenny, Mark T. Keane
Published 2020-09-10, Version 1
This paper reports two experiments (N=349) on the impact of post-hoc explanations-by-example and error rates on people's perceptions of a black-box classifier. Both experiments show that when people are given case-based explanations from an implemented ANN-CBR twin system, they perceive misclassifications to be more correct. They also show that as error rates increase above 4%, people trust the classifier less and view it as less correct, less reasonable, and less trustworthy. The implications of these results for XAI are discussed.
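The case-based explanations studied here come from an ANN-CBR twin, in which the trained network supplies a feature space and a nearest-neighbour retrieval over the training set selects the example shown to the user. A minimal sketch of that general idea follows; the `model` object, its `predict` method, and the `extract_features` function are hypothetical placeholders, not the authors' implementation:

```python
# Sketch of a case-based (example-based) explanation in the spirit of an
# ANN-CBR twin: explain a prediction by retrieving the nearest training
# case in the network's learned feature space. All names here are
# illustrative assumptions, not the paper's code.
import numpy as np

def explain_by_example(x, X_train, y_train, model, extract_features):
    """Return the nearest training case (and its label) to test item x,
    measured in the model's feature space."""
    pred = model.predict(x[None, ...]).argmax()      # predicted class of x
    feats_train = extract_features(X_train)          # (N, d) training features
    feat_x = extract_features(x[None, ...])          # (1, d) query features
    # Restrict candidates to training items the model assigns the same
    # class, so the explanatory case supports the prediction it explains.
    same_class = model.predict(X_train).argmax(axis=1) == pred
    idx_pool = np.where(same_class)[0]
    dists = np.linalg.norm(feats_train[idx_pool] - feat_x, axis=1)
    nearest = idx_pool[np.argmin(dists)]
    return X_train[nearest], y_train[nearest], pred
```

The returned training image would then be displayed alongside the classifier's output as its "explanation by example".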
Comments: 2 Figures, 1 Table, 8 pages
Journal: IJCAI-20 Workshop on Explainable AI (XAI), September 2020
Tags: journal article
Related articles:
arXiv:2410.18066 [cs.LG] (Published 2024-10-23)
The Double-Edged Sword of Behavioral Responses in Strategic Classification: Theory and User Studies
arXiv:2402.05007 [cs.LG] (Published 2024-02-07)
Example-based Explanations for Random Forests using Machine Unlearning
arXiv:2311.14581 [cs.LG] (Published 2023-11-24)
Example-Based Explanations of Random Forest Predictions