arXiv Analytics

arXiv:2104.04148 [cs.LG]

Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation

Alfredo Carrillo, Luis F. Cantú, Luis Tejerina, Alejandro Noriega

Published 2021-04-09 (Version 1)

Machine learning methods are being increasingly applied in sensitive societal contexts, where decisions impact human lives. Hence it has become necessary to build capabilities for providing easily-interpretable explanations of models' predictions. A vast number of explanation methods has recently been proposed in the academic literature. Unfortunately, to our knowledge, little has been documented about the challenges machine learning practitioners most often face when applying these methods in real-world scenarios. For example, a typical procedure such as feature engineering can render some explanation methodologies inapplicable. The present case study has two main objectives. First, it exposes these challenges and how they affect the use of relevant and novel explanation methods. Second, it presents a set of strategies that mitigate such challenges, as faced when implementing explanation methods in a relevant application domain: poverty estimation and its use for prioritizing access to social policies.
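
The feature-engineering challenge the abstract mentions can be made concrete with a minimal sketch (not from the paper, and assuming a scikit-learn workflow with hypothetical survey variables income and household_size): once a model is trained on derived columns such as log_income and income_per_capita, off-the-shelf importance scores attach to those engineered features, and translating them back into statements about the original survey variables is the practitioner's burden.

```python
# Minimal sketch (hypothetical data and feature names, not the paper's method):
# importance scores land on engineered features, not the raw survey variables.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
raw = pd.DataFrame({
    "income": rng.lognormal(8, 1, 500),        # raw survey variable
    "household_size": rng.integers(1, 9, 500), # raw survey variable
})
y = raw["income"] / raw["household_size"] + rng.normal(0, 50, 500)

# Feature engineering: the model only ever sees the derived columns.
X = pd.DataFrame({
    "log_income": np.log(raw["income"]),
    "income_per_capita": raw["income"] / raw["household_size"],
})

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Importances are expressed over the engineered features; mapping them back
# to "income" and "household_size" for an end user is the open problem.
imp = permutation_importance(model, X_te, y_te, random_state=0)
for name, score in zip(X.columns, imp.importances_mean):
    print(f"{name}: {score:.3f}")
```

In this sketch the attribution for income_per_capita mixes the effects of two raw variables, so an explanation phrased in terms of the model's inputs is not directly an explanation in terms of the quantities a policy stakeholder recognizes.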

Related articles:
arXiv:1806.07129 [cs.LG] (Published 2018-06-19)
Instance-Level Explanations for Fraud Detection: A Case Study
arXiv:1810.05524 [cs.LG] (Published 2018-10-10)
Introducing a hybrid model of DEA and data mining in evaluating efficiency. Case study: Bank Branches
arXiv:1901.09895 [cs.LG] (Published 2019-01-27)
Modularization of End-to-End Learning: Case Study in Arcade Games