arXiv Analytics

arXiv:2205.13662 [stat.ML]

Explaining Preferences with Shapley Values

Robert Hu, Siu Lun Chau, Jaime Ferrando Huertas, Dino Sejdinovic

Published 2022-05-26, Version 1

While preference modelling is becoming one of the pillars of machine learning, the problem of preference explanation remains challenging and underexplored. In this paper, we propose \textsc{Pref-SHAP}, a Shapley-value-based model explanation framework for pairwise comparison data. We derive the appropriate value functions for preference models and further extend the framework to model and explain \emph{context-specific} information, such as the surface type in a tennis match. To demonstrate the utility of \textsc{Pref-SHAP}, we apply our method to a variety of synthetic and real-world datasets and show that it yields richer and more insightful explanations than the baseline.
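The abstract does not spell out the paper's value functions, but the Shapley machinery it builds on can be illustrated generically. Below is a minimal sketch of exact Shapley value computation by coalition enumeration, with a hypothetical additive value function standing in for a preference model's value function (all names here are illustrative assumptions, not the authors' code):

```python
import itertools
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all coalitions (tractable only for small n)."""
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(r) * factorial(n_features - r - 1) / factorial(n_features)
                # Marginal contribution of feature i to coalition S
                phi[i] += w * (value_fn(frozenset(S) | {i}) - value_fn(frozenset(S)))
    return phi

# Hypothetical additive value function for one pairwise comparison:
# each present feature contributes a fixed amount to the score of "A preferred over B".
contrib = [0.5, -0.2, 0.3]
v = lambda S: sum(contrib[j] for j in S)

phi = shapley_values(v, 3)
```

For an additive value function like this toy one, the Shapley values recover the per-feature contributions exactly, and the efficiency property holds: the attributions sum to `v(all features) - v(empty set)`. The paper's contribution is precisely in deriving non-trivial value functions appropriate for pairwise preference models, where this additivity does not hold.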

Related articles:
arXiv:1903.10464 [stat.ML] (Published 2019-03-25)
Explaining individual predictions when features are dependent: More accurate approximations to Shapley values
arXiv:1909.03495 [stat.ML] (Published 2019-09-08)
Shapley Values of Reconstruction Errors of PCA for Explaining Anomaly Detection
arXiv:2106.12228 [stat.ML] (Published 2021-06-23)
groupShapley: Efficient prediction explanation with Shapley values for feature groups