arXiv:2108.02662 [cs.LG]

Reducing Unintended Bias of ML Models on Tabular and Textual Data

Guilherme Alves, Maxime Amblard, Fabien Bernier, Miguel Couceiro, Amedeo Napoli

Published 2021-08-05, Version 1

Unintended biases in machine learning (ML) models are among the major concerns that must be addressed to maintain public trust in ML. In this paper, we address the process fairness of ML models, which consists in reducing the dependence of models on sensitive features without compromising their performance. We revisit the FixOut framework, which is inspired by the approach of "fairness through unawareness", to build fairer models. We introduce several improvements, such as automating the choice of FixOut's parameters. While FixOut was originally proposed to improve the fairness of ML models on tabular data, we also demonstrate the feasibility of FixOut's workflow for models on textual data. We present several experimental results showing that FixOut improves process fairness in different classification settings.
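
The core idea behind "fairness through unawareness" can be illustrated with a minimal sketch: train one model on all features and another with the sensitive features dropped, then compare performance. The column names ("gender", "race", "income_label") and the file "tabular_data.csv" below are hypothetical placeholders; FixOut's actual workflow (explanation-based assessment of the importance of sensitive features, followed by ensembling) goes beyond this illustration.

    # A minimal sketch of "fairness through unawareness": remove sensitive
    # features before training so the model cannot depend on them directly.
    # Column and file names are hypothetical placeholders.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    SENSITIVE = ["gender", "race"]   # hypothetical sensitive feature names
    TARGET = "income_label"          # hypothetical binary target column

    df = pd.read_csv("tabular_data.csv")  # hypothetical dataset

    y = df[TARGET]
    X_full = pd.get_dummies(df.drop(columns=[TARGET]))
    X_unaware = pd.get_dummies(df.drop(columns=[TARGET] + SENSITIVE))

    # The same random_state yields the same row split for both feature sets.
    Xf_tr, Xf_te, y_tr, y_te = train_test_split(X_full, y, random_state=0)
    Xu_tr, Xu_te, _, _ = train_test_split(X_unaware, y, random_state=0)

    # Baseline model trained on all features, including sensitive ones.
    baseline = RandomForestClassifier(random_state=0).fit(Xf_tr, y_tr)
    # "Unaware" model trained with sensitive features dropped.
    unaware = RandomForestClassifier(random_state=0).fit(Xu_tr, y_tr)

    print("baseline accuracy:", accuracy_score(y_te, baseline.predict(Xf_te)))
    print("unaware accuracy:", accuracy_score(y_te, unaware.predict(Xu_te)))

If the two accuracies are close while the unaware model no longer has access to sensitive features, performance has been preserved in the sense the abstract describes; note, however, that dropping features alone does not remove indirect dependence through correlated proxies, which is why FixOut relies on explanations and ensembling rather than this naive baseline.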

Comments: 10 pages, 2 figures, 11 tables, to be published in DSAA 2021
Categories: cs.LG, cs.AI, cs.CY
Subjects: I.2.0, J.1, J.4
Related articles:
arXiv:2103.16435 [cs.LG] (Published 2021-03-30)
EnergyVis: Interactively Tracking and Exploring Energy Consumption for ML Models
arXiv:2206.13655 [cs.LG] (Published 2022-06-27)
Deployment of ML Models using Kubeflow on Different Cloud Providers
arXiv:2004.00204 [cs.LG] (Published 2020-04-01)
Ontology-based Interpretable Machine Learning for Textual Data