arXiv Analytics

arXiv:2306.11078 [stat.ML]

Beyond Normal: On the Evaluation of Mutual Information Estimators

Paweł Czyż, Frederic Grabowski, Julia E. Vogt, Niko Beerenwinkel, Alexander Marx

Published 2023-06-19 (Version 1)

Mutual information is a general statistical dependency measure that has found applications in representation learning, causality, domain generalization, and computational biology. However, mutual information estimators are typically evaluated on simple families of probability distributions, namely the multivariate normal distribution and selected distributions with one-dimensional random variables. In this paper, we show how to construct a diverse family of distributions with known ground-truth mutual information and propose a language-independent benchmarking platform for mutual information estimators. We discuss the general applicability and limitations of classical and neural estimators in settings involving high dimensions, sparse interactions, long-tailed distributions, and high mutual information. Finally, we provide guidelines for practitioners on how to select an appropriate estimator adapted to the difficulty of the problem considered, and which issues one needs to consider when applying an estimator to a new data set.
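A minimal sketch of the idea behind such benchmarks (this is an illustration, not the paper's benchmark code): for a bivariate normal with correlation `rho`, mutual information has the closed form I(X; Y) = -1/2 log(1 - rho^2), and since MI is invariant under invertible transformations of each variable, applying a monotone map to a marginal yields a "harder" distribution (e.g. with heavier tails) whose ground-truth MI is still known. The function and variable names below are my own.

```python
import numpy as np

def gaussian_mi(rho: float) -> float:
    """Ground-truth MI (in nats) of a bivariate normal with correlation rho:
    I(X; Y) = -0.5 * log(1 - rho**2)."""
    return -0.5 * np.log(1.0 - rho ** 2)

rho = 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])

# Sample from the base distribution with known MI.
rng = np.random.default_rng(0)
xy = rng.multivariate_normal(mean=np.zeros(2), cov=cov, size=100_000)

# Apply a monotone (hence invertible) map to one marginal; the resulting
# pair has a heavier-tailed marginal but exactly the same mutual information.
x_heavy = np.sinh(xy[:, 0])

print(round(gaussian_mi(rho), 4))  # ground truth for both distributions
```

An estimator can then be run on both the Gaussian sample and the transformed one and compared against the same analytic value, which is how invariance lets a simple base family generate more diverse test cases.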

Related articles: Most relevant | Search more
arXiv:2211.02912 [stat.ML] (Published 2022-11-05)
New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound
arXiv:1511.01844 [stat.ML] (Published 2015-11-05)
A note on the evaluation of generative models
arXiv:2106.01921 [stat.ML] (Published 2021-06-03)
Sample Selection Bias in Evaluation of Prediction Performance of Causal Models