arXiv:1709.02012 [cs.LG]

On Fairness and Calibration

Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, Kilian Q. Weinberger

Published: 2017-09-06 (Version 1)

The machine learning community has become increasingly concerned with the potential for bias and discrimination in predictive models, and this has motivated a growing line of work on what it means for a classification procedure to be "fair." In particular, we investigate the tension between minimizing error disparity across different population groups and maintaining calibrated probability estimates. We show that calibration is compatible only with a single error constraint (i.e., equal false-negative rates across groups), and that any algorithm satisfying this relaxation is no better than randomizing a percentage of predictions for an existing classifier. These unsettling findings, which extend and generalize existing results, are empirically confirmed on several datasets.
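
The randomization result can be made concrete with a small post-processing sketch: given calibrated scores for one group, its generalized false-negative rate can be raised to match another group's by replacing a randomly chosen fraction of predictions with that group's base rate, which preserves calibration on average. The sketch below is illustrative only; the function names (`generalized_fnr`, `equalize_fnr_by_randomization`) and the NumPy-based implementation are assumptions for exposition, not the authors' released code.

```python
import numpy as np

def generalized_fnr(scores, labels):
    """Generalized false-negative rate: mean predicted 'miss' probability
    (1 - score) among the true positives. `scores` and `labels` are NumPy
    arrays of calibrated probabilities and binary outcomes."""
    return float(np.mean(1.0 - scores[labels == 1]))

def equalize_fnr_by_randomization(scores, labels, target_fnr, rng=None):
    """Raise this group's generalized FNR to `target_fnr` by replacing a
    random fraction of predictions with the group's base rate, which keeps
    the scores calibrated on average. Returns (adjusted_scores, fraction)."""
    rng = np.random.default_rng() if rng is None else rng
    base_rate = float(labels.mean())            # mu: fraction of positives
    current = generalized_fnr(scores, labels)   # group's current FNR
    # A withheld prediction (outputting mu) has FNR = 1 - mu, so mixing in a
    # fraction alpha yields (1 - alpha) * current + alpha * (1 - mu).
    denom = (1.0 - base_rate) - current
    alpha = 0.0 if denom == 0 else float(
        np.clip((target_fnr - current) / denom, 0.0, 1.0))
    withhold = rng.random(len(scores)) < alpha
    adjusted = np.where(withhold, base_rate, scores)
    return adjusted, alpha

# Hypothetical usage: raise group B's false-negative rate to match group A's.
# adjusted_b, frac = equalize_fnr_by_randomization(scores_b, labels_b,
#                                                  target_fnr=fnr_group_a)
```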

Comments: First two authors contributed equally. To appear in NIPS 2017
Categories: cs.LG, cs.CY, stat.ML
Related articles:
arXiv:2401.14483 [cs.LG] (Published 2024-01-25)
Four Facets of Forecast Felicity: Calibration, Predictiveness, Randomness and Regret
arXiv:1909.02827 [cs.LG] (Published 2019-09-06)
Master your Metrics with Calibration
arXiv:2007.04206 [cs.LG] (Published 2020-07-08)
Diverse Ensembles Improve Calibration