arXiv:2312.16191 [cs.LG]

SoK: Taming the Triangle -- On the Interplays between Fairness, Interpretability and Privacy in Machine Learning

Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala

Published 2023-12-22 (Version 1)

Machine learning techniques are increasingly used for high-stakes decision-making, such as college admissions, loan attribution or recidivism prediction. Thus, it is crucial to ensure that the learned models can be audited or understood by human users, do not create or reproduce discrimination or bias, and do not leak sensitive information regarding their training data. Indeed, interpretability, fairness and privacy are key requirements for the development of responsible machine learning, and all three have been studied extensively during the last decade. However, they have mainly been considered in isolation, while in practice they interact with each other, either positively or negatively. In this Systematization of Knowledge (SoK) paper, we survey the literature on the interactions between these three desiderata. More precisely, for each pairwise interaction, we summarize the identified synergies and tensions. These findings highlight several fundamental theoretical and empirical conflicts, while also demonstrating that jointly considering these different requirements is challenging when one aims at preserving a high level of utility. To address this issue, we also discuss possible conciliation mechanisms, showing that careful design can make it possible to successfully handle these different concerns in practice.

Related articles:
arXiv:2001.02522 [cs.LG] (Published 2020-01-08)
On Interpretability of Artificial Neural Networks
arXiv:1811.10469 [cs.LG] (Published 2018-11-21)
How to improve the interpretability of kernel learning
arXiv:1910.03081 [cs.LG] (Published 2019-10-07)
On the Interpretability and Evaluation of Graph Representation Learning