arXiv Analytics

arXiv:2104.02532 [cs.LG]

On the Basis of Sex: A Review of Gender Bias in Machine Learning Applications

Tal Feldman, Ashley Peake

Published 2021-04-06 (Version 1)

Machine learning models have been deployed across almost every aspect of society, often in situations that affect the social welfare of many individuals. Although these models offer streamlined solutions to large problems, they may contain biases and treat groups or individuals unfairly. To our knowledge, this review is one of the first to focus specifically on gender bias in applications of machine learning. We first introduce several examples of machine learning gender bias in practice. We then detail the most widely used formalizations of fairness to address how to make machine learning models fairer. Specifically, we discuss the most influential bias mitigation algorithms as applied to domains in which models have a high propensity for gender discrimination. We group these algorithms into two overarching approaches -- removing bias from the data directly and removing bias from the model through training -- and present representative examples of each. As society increasingly relies on artificial intelligence to aid decision-making, addressing the gender biases present in these models is imperative. To provide readers with the tools to assess the fairness of machine learning models and mitigate the biases present in them, we discuss multiple open source packages for fairness in AI.
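As a concrete illustration of one widely used fairness formalization mentioned above, the sketch below computes the demographic parity difference: the gap in positive-prediction rates between two groups. This is a minimal, self-contained example and is not code from the paper; the function name and the toy data are illustrative assumptions.

```python
# Minimal sketch (not from the reviewed paper): demographic parity
# difference, one common formalization of group fairness.
# Assumes exactly two groups; names and data are illustrative.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        members = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in members) / len(members)
    a, b = rates.values()  # exactly two groups assumed
    return abs(a - b)

# Toy binary predictions for two groups of four individuals each:
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["f", "f", "f", "f", "m", "m", "m", "m"]
print(demographic_parity_difference(preds, groups))  # → 0.5
```

A value of 0 would indicate that both groups receive positive predictions at the same rate; larger values indicate greater disparity. Production-grade versions of this and related metrics are provided by the open source fairness packages the review discusses.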

Related articles:
arXiv:1212.1100 [cs.LG] (Published 2012-12-05)
Making Early Predictions of the Accuracy of Machine Learning Applications
arXiv:2312.00237 [cs.LG] (Published 2023-11-30)
Negotiated Representations to Prevent Forgetting in Machine Learning Applications
arXiv:2404.03082 [cs.LG] (Published 2024-04-03)
Machine Learning and Data Analysis Using Posets: A Survey