arXiv:2204.09462 [cs.LG]

Quantity vs Quality: Investigating the Trade-Off between Sample Size and Label Reliability

Timo Bertram, Johannes Fürnkranz, Martin Müller

Published 2022-04-20 (Version 1)

In this paper, we study learning in probabilistic domains where the learner may receive incorrect labels but can improve label reliability by repeatedly sampling them. In such a setting, one faces the problem of whether a fixed budget for obtaining training examples should be spent on acquiring as many different examples as possible, or on improving the label quality of a smaller set of examples by re-sampling their labels. We motivate this problem with an application to comparing the strength of poker hands, where the training signal depends on the hidden community cards, and then study it in depth in an artificial setting where we insert controlled noise levels into the MNIST database. Our results show that with increasing levels of noise, re-sampling previous examples becomes increasingly more important than obtaining new examples, as classifier performance deteriorates when the number of incorrect labels is too high. In addition, we propose two different validation strategies: switching from fewer to more label validations over the course of training, and using chi-square statistics to approximate the confidence in obtained labels.
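As a rough illustration of the re-sampling idea and the chi-square confidence heuristic mentioned in the abstract (a minimal sketch, not the authors' implementation; the noisy-oracle model, all function names, and the choice of scipy.stats.chisquare against a uniform null are assumptions made here for illustration):

    import random
    from scipy.stats import chisquare

    def sample_label(true_label, noise, num_classes, rng):
        # Noisy oracle: with probability `noise`, return a uniformly
        # random class (which may coincide with the true label).
        if rng.random() < noise:
            return rng.randrange(num_classes)
        return true_label

    def label_confidence(samples, num_classes):
        # Chi-square goodness-of-fit test against a uniform distribution:
        # a small p-value means the observed label counts are unlikely to
        # arise from pure noise, so the majority label can be trusted more.
        counts = [samples.count(c) for c in range(num_classes)]
        _, p_value = chisquare(counts)  # expected frequencies default to uniform
        majority = max(range(num_classes), key=counts.__getitem__)
        return majority, 1.0 - p_value

    rng = random.Random(0)
    samples = [sample_label(3, noise=0.4, num_classes=10, rng=rng) for _ in range(20)]
    label, conf = label_confidence(samples, 10)
    print(label, round(conf, 3))  # majority vote plus a rough confidence score

Under this sketch, a fixed sampling budget can be spent either on new examples or on additional calls to sample_label for existing ones, with the resulting confidence serving as a stopping criterion for re-sampling.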

Comments: Preliminary work under review for ICML 2022
Categories: cs.LG, cs.AI, stat.ML
Related articles:
arXiv:1811.07062 [cs.LG] (Published 2018-11-16, updated 2019-06-03)
The Full Spectrum of Deepnet Hessians at Scale: Dynamics with SGD Training and Sample Size
arXiv:2411.09900 [cs.LG] (Published 2024-11-15)
Statistical Analysis of Policy Space Compression Problem
arXiv:2312.12050 [cs.LG] (Published 2023-12-19)
Extension of the Dip-test Repertoire -- Efficient and Differentiable p-value Calculation for Clustering