
arXiv:2105.11570 [cs.LG]

Robust Fairness-aware Learning Under Sample Selection Bias

Wei Du, Xintao Wu

Published 2021-05-24 (Version 1)

The underlying assumption of many machine learning algorithms is that the training data and test data are drawn from the same distribution. However, this assumption is often violated in the real world due to sample selection bias between the training and test data. Previous research has focused on reweighing the biased training data to match the test data and then building classification models on the reweighed training data. However, how to achieve fairness in the resulting classification models is under-explored. In this paper, we propose a framework for robust and fair learning under sample selection bias. Our framework adopts a reweighing estimation approach for bias correction and a minimax robust estimation approach for achieving robustness in prediction accuracy. Moreover, during the minimax optimization, fairness is achieved under the worst case, which guarantees the model's fairness on test data. We further develop two algorithms to handle sample selection bias, one for the case where test data is available and one for the case where it is unavailable. We conduct experiments on two real-world datasets, and the results demonstrate the framework's effectiveness in terms of both utility and fairness metrics.
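The abstract does not include the algorithms themselves. For orientation, below is a minimal sketch of the reweighing step for bias correction, assuming a standard density-ratio approach in which a domain classifier distinguishes training from test samples; the paper's minimax robust estimation and worst-case fairness components are not shown, and the synthetic data and variable names are illustrative assumptions, not the authors' method.

```python
# Minimal illustrative sketch (assumption, not the paper's algorithm):
# estimate importance weights with a train-vs-test domain classifier,
# then fit a classifier on the reweighed training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic biased training set and unbiased test set (assumption).
X_train = rng.normal(loc=0.5, scale=1.0, size=(500, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)
X_test = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# Step 1: domain classifier separating training (label 1) from test (label 0).
X_dom = np.vstack([X_train, X_test])
d_dom = np.concatenate([np.ones(len(X_train)), np.zeros(len(X_test))])
dom_clf = LogisticRegression().fit(X_dom, d_dom)

# Step 2: importance weight w(x) proportional to P(test|x) / P(train|x),
# so the reweighed training set mimics the test distribution.
p_train = dom_clf.predict_proba(X_train)[:, 1]
weights = (1.0 - p_train) / np.clip(p_train, 1e-6, None)

# Step 3: fit the final classifier on the reweighed training data.
clf = LogisticRegression().fit(X_train, y_train, sample_weight=weights)
```

In the paper's setting, this reweighing would be wrapped in a minimax optimization so that accuracy and fairness hold under the worst-case weights rather than a single point estimate.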
