arXiv:1905.04519 [cs.LG]

Interpret Federated Learning with Shapley Values

Guan Wang

Published 2019-05-11 (Version 1)

Federated Learning is introduced to protect privacy by distributing training data across multiple parties. Each party trains its own model, and a meta-model is constructed from the sub-models, so the details of the data are never disclosed between parties. In this paper we investigate model interpretation methods for Federated Learning, specifically the measurement of feature importance in vertical Federated Learning, where the feature space of the data is divided between two parties, namely a host and a guest. When the host party interprets a single prediction of the vertical Federated Learning model, the interpretation results, namely the feature importances, are very likely to reveal the protected data of the guest party. We propose a method to balance model interpretability and data privacy in vertical Federated Learning by using Shapley values to reveal detailed feature importance for host features and a unified importance value for the federated guest features. Our experiments indicate robust and informative results for interpreting Federated Learning models.
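The grouping idea in the abstract can be sketched directly: treat each host feature as its own coalition player and all guest features as a single player, then compute exact Shapley values for one prediction. The sketch below is only an illustration of that idea under assumed details, not the paper's implementation; the predict function, the baseline masking scheme, and the specific player grouping are all hypothetical choices.

import itertools
import math

import numpy as np

def shapley_values(predict, x, baseline, players):
    # Exact Shapley values for a single prediction. `players` is a list of
    # feature-index groups, each treated as one coalition player; putting all
    # guest-feature indices into one group yields a single unified importance
    # value for the guest side, while host features stay individually resolved.
    n = len(players)
    phi = np.zeros(n)

    def value(subset):
        # Players in `subset` contribute their true feature values; every
        # other feature is masked with the baseline (one simple masking choice).
        z = baseline.copy()
        for p in subset:
            z[players[p]] = x[players[p]]
        return predict(z)

    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(n):
            for S in itertools.combinations(others, size):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                w = math.factorial(size) * math.factorial(n - size - 1) / math.factorial(n)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

# Toy usage: host owns features 0-1, guest owns features 2-3 (grouped as one player).
w_lin = np.array([0.5, -1.0, 2.0, 0.3])
predict = lambda z: float(z @ w_lin)
x = np.array([1.0, -0.5, 0.8, 2.0])
baseline = np.zeros(4)
phi = shapley_values(predict, x, baseline, players=[[0], [1], [2, 3]])
# phi[0], phi[1]: detailed host importances; phi[2]: unified guest importance.

Because the guest features only ever enter the value function as one block, the host learns a single aggregate contribution for the guest side rather than per-feature attributions, which is the privacy-interpretability trade-off the abstract describes.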

Related articles:
arXiv:2207.11788 [cs.LG] (Published 2022-07-24, updated 2022-08-07)
Privacy Against Inference Attacks in Vertical Federated Learning
arXiv:2405.20761 [cs.LG] (Published 2024-05-31)
Share Your Secrets for Privacy! Confidential Forecasting with Vertical Federated Learning
arXiv:2410.22564 [cs.LG] (Published 2024-10-29)
Vertical Federated Learning with Missing Features During Training and Inference