arXiv Analytics

arXiv:1406.2237 [stat.ML]

Reducing the Effects of Detrimental Instances

Michael R. Smith, Tony Martinez

Published 2014-06-09, updated 2014-10-14 (Version 2)

Not all instances in a data set are equally beneficial for inducing a model of the data. Some instances (such as outliers or noise) can be detrimental. However, at least initially, machine learning algorithms generally treat the instances in a data set equally. Many current approaches for handling noisy and detrimental instances make a binary decision about whether an instance is detrimental or not. In this paper, we 1) extend this paradigm by weighting the instances on a continuous scale and 2) present a methodology for measuring how detrimental an instance may be for inducing a model of the data. We call our method of identifying and weighting detrimental instances reduced detrimental instance learning (RDIL). We examine RDIL on a set of 54 data sets and 5 learning algorithms and compare RDIL with other weighting and filtering approaches. RDIL is especially useful for learning algorithms where every instance can affect the classification boundary and the training instances are considered individually, such as multilayer perceptrons (MLPs) trained with backpropagation. Our results also suggest that a more accurate estimate of which instances are detrimental can have a significant positive impact on handling them.
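The continuous weighting idea described in the abstract, as opposed to a binary keep/discard filter, can be sketched with a weighted learner. The snippet below is a minimal illustration, not the paper's RDIL procedure: the per-instance weights are fixed by hand here, whereas RDIL derives them from an estimate of how detrimental each instance is. The data, weights, and the choice of logistic regression are all assumptions for the sake of a runnable example.

```python
import numpy as np

# Toy data: two Gaussian blobs, with one deliberately mislabeled
# (detrimental) instance at index 0.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
y[0] = 1  # flip one label to simulate label noise

# Hypothetical continuous weights: 1.0 for trusted instances, near 0
# for the suspected detrimental one. (In RDIL these weights would come
# from a learned estimate of detrimentality, not be hard-coded.)
w = np.ones(len(y), dtype=float)
w[0] = 0.05

def train_weighted_logreg(X, y, w, lr=0.1, epochs=200):
    """Logistic regression by gradient descent, with each instance's
    contribution to the gradient scaled by its weight."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    theta = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ theta))       # predicted P(y=1)
        grad = Xb.T @ (w * (p - y)) / w.sum()       # weighted gradient
        theta -= lr * grad
    return theta

theta = train_weighted_logreg(X, y, w)
Xb = np.hstack([X, np.ones((len(X), 1))])
preds = (Xb @ theta > 0).astype(int)
acc_clean = (preds[1:] == y[1:]).mean()  # accuracy on the clean instances
```

Down-weighting rather than removing the suspect instance lets it still contribute a little signal while limiting its pull on the decision boundary, which matches the paper's point that a binary detrimental/not-detrimental decision is a special case of continuous weighting.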

Comments: 6 pages, 5 tables, 2 figures. arXiv admin note: substantial text overlap with arXiv:1403.1893
Categories: stat.ML, cs.LG