arXiv Analytics


arXiv:2106.04684 [cs.LG]

Explainable AI for medical imaging: Explaining pneumothorax diagnoses with Bayesian Teaching

Tomas Folke, Scott Cheng-Hsin Yang, Sean Anderson, Patrick Shafto

Published: 2021-06-08 (Version 1)

Limited expert time is a key bottleneck in medical imaging. Due to advances in image classification, AI can now serve as decision-support for medical experts, with the potential for great gains in radiologist productivity and, by extension, public health. However, these gains are contingent on building and maintaining experts' trust in the AI agents. Explainable AI may build such trust by helping medical experts to understand the AI decision processes behind diagnostic judgements. Here we introduce and evaluate explanations based on Bayesian Teaching, a formal account of explanation rooted in the cognitive science of human learning. We find that medical experts exposed to explanations generated by Bayesian Teaching successfully predict the AI's diagnostic decisions and are more likely to certify the AI for cases when the AI is correct than when it is wrong, indicating appropriate trust. These results show that Explainable AI can be used to support human-AI collaboration in medical imaging.
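For context, a minimal sketch of the example-selection idea behind Bayesian Teaching, in illustrative notation that is not drawn from this paper: the teacher favors explanatory examples D in proportion to the posterior they induce in a learner for the target model \Theta^*.

\[
  P_T(D \mid \Theta^*) \;=\;
  \frac{P_L(\Theta^* \mid D)}{\sum_{D'} P_L(\Theta^* \mid D')},
  \qquad
  P_L(\Theta^* \mid D) \;\propto\; P(D \mid \Theta^*)\, P(\Theta^*).
\]

Under this view, an explanation is a small set of examples chosen so that a learner who updates on them would arrive at the AI's diagnosis; the paper evaluates explanations of this kind with medical experts.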

Journal: Proc. SPIE 11746, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, 117462J (2021)
Categories: cs.LG, cs.AI, cs.HC
Related articles:
arXiv:2208.11404 [cs.LG] (Published 2022-08-24)
Augmented cross-selling through explainable AI -- a case from energy retailing
arXiv:2406.15789 [cs.LG] (Published 2024-06-22)
Privacy Implications of Explainable AI in Data-Driven Systems
arXiv:2005.00130 [cs.LG] (Published 2020-04-30)
Hide-and-Seek: A Template for Explainable AI