arXiv Analytics


arXiv:2308.01923 [cs.LG]

An Empirical Study on Fairness Improvement with Multiple Protected Attributes

Zhenpeng Chen, Jie M. Zhang, Federica Sarro, Mark Harman

Published 2023-07-25 | Version 1

Existing research mostly improves the fairness of Machine Learning (ML) software regarding a single protected attribute at a time, but this is unrealistic given that many users have multiple protected attributes. This paper conducts an extensive study of fairness improvement regarding multiple protected attributes, covering 11 state-of-the-art fairness improvement methods. We analyze the effectiveness of these methods with different datasets, metrics, and ML models when considering multiple protected attributes. The results reveal that improving fairness for a single protected attribute can substantially decrease fairness regarding unconsidered protected attributes. This decrease is observed in up to 88.3% of scenarios (57.5% on average). More surprisingly, we find little difference in accuracy loss when considering single and multiple protected attributes, indicating that accuracy can be maintained in the multiple-attribute paradigm. However, the effects on precision and recall when handling multiple protected attributes are about 5 times and 8 times those of a single attribute, respectively. This has important implications for future fairness research: reporting only accuracy as the ML performance metric, which is currently common in the literature, is inadequate.
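As a rough illustration of the kind of evaluation the abstract describes, the sketch below measures fairness with respect to every protected attribute (not just the one a mitigation method targets) and reports precision and recall alongside accuracy. The toy dataset, column names, logistic-regression model, and the use of statistical parity difference as the fairness metric are illustrative assumptions, not the authors' actual benchmark or protocol.

```python
# Minimal sketch (assumed setup, not the paper's benchmark): evaluate a binary
# classifier's performance and its fairness with respect to two protected
# attributes at once.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

def statistical_parity_difference(y_pred, group):
    """|P(y_hat=1 | group=0) - P(y_hat=1 | group=1)| for a binary protected attribute."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy data with two protected attributes ("sex", "race") encoded as 0/1.
rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "feature": rng.normal(size=n),
    "sex": rng.integers(0, 2, size=n),
    "race": rng.integers(0, 2, size=n),
})
y = (X["feature"] + 0.3 * X["sex"] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

# Report ML performance beyond accuracy, as the paper recommends.
print("accuracy :", accuracy_score(y_te, y_hat))
print("precision:", precision_score(y_te, y_hat))
print("recall   :", recall_score(y_te, y_hat))

# Measure fairness for *each* protected attribute, including ones that a
# fairness improvement method was not tuned for.
for attr in ["sex", "race"]:
    spd = statistical_parity_difference(y_hat, X_te[attr].to_numpy())
    print(f"SPD w.r.t. {attr}: {spd:.3f}")
```

In this kind of setup, a method tuned only on "sex" can leave (or worsen) the disparity measured on "race", which is the effect the study quantifies across datasets, metrics, and models.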

Related articles:
arXiv:2311.05790 [cs.LG] (Published 2023-11-09)
The Paradox of Noise: An Empirical Study of Noise-Infusion Mechanisms to Improve Generalization, Stability, and Privacy in Federated Learning
arXiv:2002.09779 [cs.LG] (Published 2020-02-22)
Stochasticity in Neural ODEs: An Empirical Study
arXiv:1806.07755 [cs.LG] (Published 2018-06-19)
An empirical study on evaluation metrics of generative adversarial networks