arXiv:2302.08077 [cs.LG]

Group Fairness with Uncertainty in Sensitive Attributes

Abhin Shah, Maohao Shen, Jongha Jon Ryu, Subhro Das, Prasanna Sattigeri, Yuheng Bu, Gregory W. Wornell

Published 2023-02-16 (Version 1)

We consider learning a fair predictive model when the sensitive attributes are uncertain, e.g., due to a limited amount of labeled data, collection bias, or a privacy mechanism. We formulate the problem, for the independence notion of fairness, using the information bottleneck principle, and propose a robust optimization with respect to an uncertainty set of the sensitive attributes. As an illustrative case, we consider the joint Gaussian model and reduce the task to a quadratically constrained quadratic program (QCQP). To ensure a strict fairness guarantee, we propose a robust QCQP and completely characterize its solution with an intuitive geometric understanding. When uncertainty arises from limited labeled sensitive attributes, our analysis reveals the contribution of each new sample towards the optimal performance achievable with unlimited access to labeled sensitive attributes. This allows us to identify non-trivial regimes where uncertainty incurs no performance loss for the proposed algorithm, which continues to guarantee strict fairness. We also propose a bootstrap-based generic algorithm that is applicable beyond the Gaussian case. We demonstrate the value of our analysis and method on synthetic data as well as real-world classification and regression tasks.
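The Gaussian-case reduction and the bootstrap idea admit a compact illustration. The sketch below is not the authors' implementation: the cvxpy modeling, the tolerance eps, the bootstrap count n_boot, and the plain least-squares objective are all illustrative assumptions. For a linear predictor w, independence-style fairness is relaxed to (c^T w)^2 <= eps for every bootstrap estimate c of the cross-covariance Cov(X, S); each such constraint is a convex quadratic in w, so the robust problem stays tractable.

```python
# Minimal sketch (assumptions noted above): fair linear regression with a
# bootstrap uncertainty set over the feature/sensitive-attribute covariance.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Synthetic joint-Gaussian data: features X, target y, sensitive attribute s.
n, d = 500, 5
X = rng.standard_normal((n, d))
s = X @ rng.standard_normal(d) + 0.5 * rng.standard_normal(n)
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Treat only the first n_labeled rows as having labeled sensitive attributes;
# bootstrap that subset to form an uncertainty set of Cov(X, s) estimates.
n_labeled, n_boot, eps = 50, 20, 1e-2
cov_estimates = []
for _ in range(n_boot):
    idx = rng.choice(n_labeled, size=n_labeled, replace=True)
    Xb, sb = X[idx], s[idx]
    c = (Xb - Xb.mean(axis=0)).T @ (sb - sb.mean()) / n_labeled
    cov_estimates.append(c)

# Robust QCQP-style problem: minimize MSE subject to (c^T w)^2 <= eps for
# every cross-covariance estimate c in the uncertainty set.
w = cp.Variable(d)
objective = cp.Minimize(cp.sum_squares(X @ w - y) / n)
constraints = [cp.square(c @ w) <= eps for c in cov_estimates]
cp.Problem(objective, constraints).solve()

print("fair weights:", np.round(w.value, 3))
print("worst-case |Cov(Xw, s)| proxy:",
      max(abs(c @ w.value) for c in cov_estimates))
```

Setting eps = 0 corresponds to the strict-fairness regime: each constraint becomes the linear equality c^T w = 0, and the solution must lie in the intersection of the hyperplanes defined by the covariance estimates in the uncertainty set.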

Related articles:
arXiv:2304.00295 [cs.LG] (Published 2023-04-01)
Fair-CDA: Continuous and Directional Augmentation for Group Fairness
Rui Sun et al.
arXiv:2206.12292 [cs.LG] (Published 2022-06-23)
InfoAT: Improving Adversarial Training Using the Information Bottleneck Principle
arXiv:2008.09490 [cs.LG] (Published 2020-08-21)
Beyond Individual and Group Fairness