arXiv:2106.07756 [cs.LG]
Counterfactual Explanations for Machine Learning: Challenges Revisited
Sahil Verma, John Dickerson, Keegan Hines
Published 2021-06-14Version 1
Counterfactual explanations (CFEs) are an emerging technique under the umbrella of interpretability of machine learning (ML) models. They provide "what if" feedback of the form "if an input datapoint were $x'$ instead of $x$, then an ML model's output would be $y'$ instead of $y$." Counterfactual explainability for ML models has yet to see widespread adoption in industry. In this short paper, we posit reasons for this slow uptake. Leveraging recent work outlining desirable properties of CFEs and our experience running the ML wing of a model monitoring startup, we identify outstanding obstacles hindering CFE deployment in industry.
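The "what if" feedback described above can be illustrated with a minimal sketch. The toy model, feature names, and greedy perturbation search below are illustrative assumptions, not the paper's method; real CFE algorithms optimize for proximity, sparsity, and plausibility rather than a fixed nudge direction.

```python
# Hypothetical sketch of a counterfactual explanation (CFE): search for a
# perturbed input x' such that the model's output flips from y to y'.
# The toy credit model and greedy search are illustrative assumptions.

def model(x):
    """Toy credit-approval model: approve (1) if linear score exceeds 0.5."""
    income, debt = x
    score = 0.3 * income - 0.2 * debt
    return 1 if score > 0.5 else 0

def find_counterfactual(x, step=0.05, max_iter=1000):
    """Greedily nudge features in a score-increasing direction until the
    prediction flips; return the counterfactual x', or None on failure."""
    x_cf = list(x)
    original = model(x_cf)
    for _ in range(max_iter):
        if model(x_cf) != original:
            return x_cf            # found x' with a flipped prediction y'
        x_cf[0] += step            # raise income
        x_cf[1] -= step / 2        # lower debt
    return None

x = [2.0, 1.0]                     # denied applicant (score 0.4, output 0)
x_cf = find_counterfactual(x)      # e.g. slightly higher income, lower debt
```

The resulting `x_cf` is the "if your input had been $x'$" half of the explanation; comparing it with `x` tells the applicant which feature changes would alter the decision.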
Comments: Presented at CHI HCXAI 2021 workshop