arXiv:1909.06342 [cs.LG]

Explainable Machine Learning in Deployment

Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José M. F. Moura, Peter Eckersley

Published 2019-09-13 (Version 1)

Explainable machine learning seeks to provide various stakeholders with insights into model behavior via feature importance scores, counterfactual explanations, and influential samples, among other techniques. Recent advances in this line of work, however, have proceeded without surveys of how organizations use these techniques in practice. This study explores how organizations view and deploy explainability for stakeholder consumption. We find that the majority of deployments are not for end users affected by the model but for machine learning engineers, who use explainability to debug the model itself. There is thus a gap between explainability in practice and the goal of public transparency, since explanations primarily serve internal stakeholders rather than external ones. Our study synthesizes the limitations of current explainability techniques that hamper their use by end users. To facilitate end user interaction, we develop a framework for establishing clear goals for explainability, including a focus on normative desiderata.
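To make the techniques named in the abstract concrete, the sketch below implements permutation feature importance, one common way to compute the feature importance scores engineers use for model debugging. This is an illustrative assumption, not code from the paper: the toy `model` and all helper names here are hypothetical, with a hand-rolled linear function standing in for a deployed model.

```python
import random

def model(x):
    # Toy stand-in for a deployed model: y = 3*x0 + 0.5*x1 (x2 is ignored).
    return 3.0 * x[0] + 0.5 * x[1]

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of feature j = average increase in error after
    randomly shuffling column j (breaking its link to the target)."""
    rng = random.Random(seed)
    baseline = mse([model(x) for x in X], y)
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)
            X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
            deltas.append(mse([model(x) for x in X_perm], y) - baseline)
        importances.append(sum(deltas) / n_repeats)
    return importances

# Small synthetic dataset: x2 is a constant, so it carries no signal.
X = [[float(i), float(i % 5), 0.1] for i in range(20)]
y = [model(x) for x in X]
imp = permutation_importance(model, X, y)
# Expect: feature 0 (weight 3.0) dominates, feature 2 is ~0.
```

A debugging engineer, per the study's findings, would inspect `imp` to confirm the model leans on the features it should; an importance concentrated on a spurious feature is exactly the kind of failure this workflow surfaces.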

Related articles: Most relevant | Search more
arXiv:2009.11698 [cs.LG] (Published 2020-09-18)
Principles and Practice of Explainable Machine Learning
arXiv:2106.12543 [cs.LG] (Published 2021-06-23)
Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
arXiv:2411.01956 [cs.LG] (Published 2024-11-04)
EXAGREE: Towards Explanation Agreement in Explainable Machine Learning