arXiv:2206.02790 [cs.LG]

Improving Model Understanding and Trust with Counterfactual Explanations of Model Confidence

Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg

Published 2022-06-06 (Version 1)

In this paper, we show through human-subject studies that counterfactual explanations of confidence scores help users better understand and better trust an AI model's predictions. Showing confidence scores in human-agent interaction systems can help build trust between humans and AI systems. However, most existing research has used the confidence score only as a form of communication, and we still lack ways to explain why the algorithm is confident. This paper also presents two methods for understanding model confidence using counterfactual explanations: (1) based on counterfactual examples; and (2) based on visualisation of the counterfactual space.
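The first method can be illustrated with a minimal sketch: starting from an input, search for a nearby point at which the model's confidence (the probability of its predicted class) crosses a target level. Everything below is an assumption for illustration, not the paper's actual algorithm: a scikit-learn logistic-regression classifier on synthetic data and a simple greedy coordinate search stand in for whatever model and counterfactual generator the authors used.

```python
# Hypothetical sketch of a counterfactual example for model *confidence*:
# nudge an input until the classifier's confidence reaches a target level.
# The classifier, data, and greedy search are illustrative assumptions,
# not the method from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
model = LogisticRegression().fit(X, y)

def confidence(x):
    # Confidence = probability assigned to the model's predicted class.
    return model.predict_proba(x.reshape(1, -1)).max()

def confidence_counterfactual(x, target=0.95, step=0.05, max_iter=200):
    """Greedily move one feature at a time until confidence >= target."""
    x_cf = x.copy()
    n = len(x_cf)
    for _ in range(max_iter):
        if confidence(x_cf) >= target:
            break
        # Candidate moves: +/- step along each feature axis (and staying put,
        # so confidence never decreases between iterations).
        candidates = [x_cf] + [x_cf + s * step * np.eye(n)[i]
                               for i in range(n) for s in (+1.0, -1.0)]
        x_cf = max(candidates, key=confidence)
    return x_cf

x = X[0]
x_cf = confidence_counterfactual(x)
print(f"confidence: {confidence(x):.3f} -> {confidence(x_cf):.3f}")
```

The difference `x_cf - x` is then the counterfactual explanation: the smallest change found that would make the model confident. The second method, visualising the counterfactual space, would amount to plotting `confidence` over a grid around `x` rather than returning a single point.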

Comments: 8 pages, Accepted to IJCAI Workshop on Explainable Artificial Intelligence 2022
Categories: cs.LG, cs.AI, cs.HC
Related articles:
arXiv:2312.11747 [cs.LG] (Published 2023-12-18)
Robust Stochastic Graph Generator for Counterfactual Explanations
arXiv:2301.01520 [cs.LG] (Published 2023-01-04)
Counterfactual Explanations for Land Cover Mapping in a Multi-class Setting
arXiv:2304.14606 [cs.LG] (Published 2023-04-28)
Counterfactual Explanation with Missing Values