arXiv Analytics

arXiv:1905.12686 [cs.LG]

Learning Representations by Humans, for Humans

Sophie Hilgard, Nir Rosenfeld, Mahzarin R. Banaji, Jack Cao, David C. Parkes

Published 2019-05-29 (Version 1)

We propose a new, complementary approach to interpretability, in which machines are viewed not as experts whose role is to suggest what should be done and why, but rather as advisers. The objective of these models is to communicate to a human decision-maker not what to decide but how to decide. In this way, we propose that machine learning pipelines will be more readily adopted, since they allow the decision-maker to retain agency. Specifically, we develop a framework for learning representations by humans, for humans, in which we learn representations of inputs ("advice") that are effective for human decision-making. Representation-generating models are trained with humans in the loop, implicitly incorporating the human decision-making model. We show that optimizing for human decision-making rather than for accuracy is effective in promoting good decisions on various classification tasks, while inherently maintaining a sense of interpretability.
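The training setup described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the human decision-maker is replaced by a fixed, differentiable surrogate (human_proxy) so the advice-generating network can be trained end to end, and all names, dimensions, and hyperparameters are chosen only for the example.

    # Minimal sketch (not the paper's code): learn a representation ("advice")
    # that a fixed decision-maker, acting only on that advice, can turn into
    # good decisions. The human is stood in for by a frozen surrogate model.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy data: 20-dimensional inputs, binary labels.
    X = torch.randn(512, 20)
    y = (X[:, :5].sum(dim=1) > 0).long()

    # f: maps raw inputs to a 2-dimensional "advice" representation.
    advice_net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

    # Surrogate of the human decision-maker: consumes only the advice and
    # outputs a decision. Kept frozen to mimic a fixed decision rule.
    human_proxy = nn.Sequential(nn.Linear(2, 2))
    for p in human_proxy.parameters():
        p.requires_grad_(False)

    opt = torch.optim.Adam(advice_net.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    # Optimize the advice so that the downstream decision is correct,
    # rather than training a predictor to output the label directly.
    for step in range(200):
        advice = advice_net(X)                 # z = f(x), the "advice"
        decision_logits = human_proxy(advice)  # decision made from the advice
        loss = loss_fn(decision_logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        acc = (human_proxy(advice_net(X)).argmax(dim=1) == y).float().mean()
    print(f"decision accuracy via advice: {acc:.2f}")

In the paper's actual human-in-the-loop setting no gradient through the human is available; the frozen surrogate here only stands in for that decision model to make the sketch self-contained.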

Related articles:
arXiv:2208.14322 [cs.LG] (Published 2022-08-30)
Learning Representations for Hyper-Relational Knowledge Graphs
arXiv:1806.10069 [cs.LG] (Published 2018-06-26)
Deep $k$-Means: Jointly Clustering with $k$-Means and Learning Representations
arXiv:2101.11201 [cs.LG] (Published 2021-01-27)
Similarity of Classification Tasks