arXiv Analytics

arXiv:2211.02912 [stat.ML]

New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound

Arushi Gupta, Nikunj Saunshi, Dingli Yu, Kaifeng Lyu, Sanjeev Arora

Published 2022-11-05, Version 1

Saliency methods compute heat maps that highlight the portions of an input that were most "important" for the label assigned to it by a deep net. Evaluations of saliency methods convert this heat map into a new "masked input" by retaining the $k$ highest-ranked pixels of the original input and replacing the rest with "uninformative" pixels, and then check whether the net's output is mostly unchanged. This is usually seen as an "explanation" of the output, but the current paper highlights reasons why this inference of causality may be suspect. Inspired by the logic concepts of completeness and soundness, it observes that the above type of evaluation focuses on completeness of the explanation but ignores soundness. New evaluation metrics are introduced that capture both notions while staying within an "intrinsic" framework, i.e., using only the dataset and the net, with no separately trained nets, human evaluations, etc. A simple saliency method is described that matches or outperforms prior methods in these evaluations. Experiments also suggest new intrinsic, soundness-based justifications for popular heuristic tricks such as TV regularization and upsampling.
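
For concreteness, below is a minimal sketch of this style of masking evaluation in PyTorch. The constant fill value, tensor shapes, and function names are illustrative assumptions, not the paper's exact protocol; other "uninformative" baselines (dataset mean, blur, noise) are also common in the literature.

```python
import torch

def masked_input(x, saliency, k, fill_value=0.0):
    """Keep the k most salient pixels of x and replace the rest with a
    constant "uninformative" value (the choice of fill is an assumption)."""
    # x: (C, H, W) image tensor; saliency: (H, W) heat map
    flat = saliency.flatten()
    keep = flat.topk(k).indices                   # indices of the k highest-ranked pixels
    mask = torch.zeros_like(flat, dtype=torch.bool)
    mask[keep] = True
    mask = mask.view_as(saliency)                 # back to (H, W); broadcasts over channels
    return torch.where(mask, x, torch.full_like(x, fill_value))

def label_preserved(model, x, saliency, k, fill_value=0.0):
    """Completeness-style check: does the net assign the same label to the
    masked input as it does to the original input?"""
    model.eval()
    with torch.no_grad():
        original = model(x.unsqueeze(0)).argmax(dim=1)
        masked = model(masked_input(x, saliency, k, fill_value).unsqueeze(0)).argmax(dim=1)
    return bool((original == masked).item())
```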

Comments: NeurIPS 2022 (Oral)
Categories: stat.ML, cs.LG
Related articles:
arXiv:1511.01844 [stat.ML] (Published 2015-11-05)
A note on the evaluation of generative models
arXiv:2106.01921 [stat.ML] (Published 2021-06-03)
Sample Selection Bias in Evaluation of Prediction Performance of Causal Models
arXiv:2306.11078 [stat.ML] (Published 2023-06-19)
Beyond Normal: On the Evaluation of Mutual Information Estimators