arXiv Analytics

arXiv:1910.04214 [cs.LG]

Who's responsible? Jointly quantifying the contribution of the learning algorithm and training data

Gal Yona, Amirata Ghorbani, James Zou

Published 2019-10-09, Version 1

A fancy learning algorithm $A$ outperforms a baseline method $B$ when they are both trained on the same data. Should $A$ get all of the credit for the improved performance, or does the training data also deserve some credit? When deployed in a new setting from a different domain, however, $A$ makes more mistakes than $B$. How much of the blame should go to the learning algorithm, and how much to the training data? Such questions are becoming increasingly important and prevalent as we aim to make ML more accountable. Their answers would also help us allocate resources between algorithm design and data collection. In this paper, we formalize these questions and provide a principled Extended Shapley framework to jointly quantify the contribution of the learning algorithm and training data. Extended Shapley uniquely satisfies several natural properties that ensure equitable treatment of data and algorithm. Through experiments and theoretical analysis, we demonstrate that Extended Shapley has several important applications: 1) it provides a new metric of ML performance improvement that disentangles the influence of the data regime and the algorithm; 2) it facilitates ML accountability by properly assigning responsibility for mistakes; 3) it provides greater robustness to manipulation by the ML designer.
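The abstract does not spell out the computation, but the classical Shapley value that the framework extends can be sketched as follows. This is a minimal illustration, not the paper's method: it treats "algorithm" and "data" as two abstract players, and the coalition accuracies in `V` are made-up numbers chosen only to mirror the scenario in the abstract (a fancy algorithm that improves on a baseline when given the data).

```python
import math
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings of the players."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            after = value(frozenset(coalition))
            phi[p] += after - before
    n_orderings = math.factorial(len(players))
    return {p: v / n_orderings for p, v in phi.items()}

# Hypothetical value function: accuracy achieved by each coalition.
# All numbers are illustrative, not from the paper.
V = {
    frozenset(): 0.50,                       # trivial baseline, no credit assigned
    frozenset({"algorithm"}): 0.50,          # algorithm alone, no training data
    frozenset({"data"}): 0.70,               # baseline method B trained on the data
    frozenset({"algorithm", "data"}): 0.90,  # fancy algorithm A trained on the data
}

phi = shapley_values(["algorithm", "data"], V.__getitem__)
# By efficiency, phi["algorithm"] + phi["data"] equals
# V(full coalition) - V(empty coalition) = 0.40.
```

With these illustrative numbers, the data receives the larger share of credit (0.30 vs. 0.10), since the baseline already gains most of the performance from the data alone; the paper's Extended Shapley formalizes this kind of split with additional axioms for equitable treatment of the algorithm and individual data points.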
