arXiv Analytics

arXiv:2207.03324 [cs.LG]

Calibrate to Interpret

Gregory Scafarto, Nicolas Posocco, Antoine Bonnefoy

Published: 2022-07-07 (version 1)

Trustworthy machine learning is driving a large body of work in the ML community aimed at improving the acceptance and adoption of ML. The main aspects of trustworthy machine learning are the following: fairness, uncertainty, robustness, explainability, and formal guarantees. Each of these domains has gained the ML community's interest, as is visible in the number of related publications. However, few works tackle the interconnections between these fields. In this paper we establish a first link between uncertainty and explainability by studying the relation between calibration and interpretation. As the calibration of a given model changes the way it scores samples, and interpretation approaches often rely on these scores, it seems safe to assume that the confidence calibration of a model interacts with our ability to interpret it. In this paper, we show, in the context of networks trained on image classification tasks, to what extent interpretations are sensitive to confidence calibration. This leads us to suggest a simple practice to improve interpretation outcomes: Calibrate to Interpret.
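To make the suggested practice concrete, here is a minimal sketch of what "calibrate, then interpret" could look like in PyTorch. It assumes temperature scaling (a standard post-hoc calibration method) and plain gradient saliency as the interpretation method; the abstract does not specify which calibration or interpretation techniques the authors evaluate, so the function names and choices below are illustrative, not the paper's code.

```python
import torch
import torch.nn.functional as F

def fit_temperature(val_logits, val_labels, lr=0.01, steps=200):
    """Fit a scalar temperature T on held-out logits/labels by minimising
    the negative log-likelihood -- standard post-hoc temperature scaling.
    (Illustrative; not necessarily the calibration method used in the paper.)"""
    log_t = torch.zeros(1, requires_grad=True)  # optimise log T so T stays positive
    optimizer = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

def calibrated_saliency(model, x, temperature):
    """Gradient saliency computed on temperature-scaled confidence scores.
    Since saliency differentiates the model's score w.r.t. the input,
    calibrating the score changes the resulting interpretation."""
    x = x.clone().requires_grad_(True)
    probs = F.softmax(model(x) / temperature, dim=1)   # calibrated confidences
    probs.max(dim=1).values.sum().backward()           # top-class confidence
    return x.grad.abs()                                # per-pixel attribution
```

Under these assumptions, the only change to a standard interpretation pipeline is dividing the logits by the fitted temperature before computing attributions, which is what makes the practice cheap to adopt.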

Comments: 16 pages, 9 figures, accepted at ECML PKDD 2022
Categories: cs.LG
Subjects: I.2.6, I.4.8, I.5.2
Related articles:
arXiv:2303.17942 [cs.LG] (Published 2023-03-31)
Benchmarking FedAvg and FedCurv for Image Classification Tasks
arXiv:2004.09466 [cs.LG] (Published 2020-04-20)
Counterfactual confounding adjustment for feature representations learned by deep models: with an application to image classification tasks
arXiv:2402.03540 [cs.LG] (Published 2024-02-05)
Regulation Games for Trustworthy Machine Learning