arXiv Analytics

arXiv:1801.04701 [cs.LG]

tau-FPL: Tolerance-Constrained Learning in Linear Time

Ao Zhang, Nan Li, Jian Pu, Jun Wang, Junchi Yan, Hongyuan Zha

Published 2018-01-15 (Version 1)

Learning a classifier with control on the false-positive rate plays a critical role in many machine learning applications. Existing approaches either introduce a label cost that depends on prior knowledge or tune parameters based on traditional classifiers; neither strictly adheres to the false-positive rate constraint, so they lack methodological consistency. In this paper, we propose a novel scoring-thresholding approach, tau-False Positive Learning (tau-FPL), to address this problem. We show that the scoring problem, which takes the false-positive rate tolerance into account, can be solved efficiently in linear time, and that an out-of-bootstrap thresholding method can transform the learned ranking function into a low false-positive classifier. Both theoretical analysis and experimental results demonstrate the superior performance of the proposed tau-FPL over existing approaches.
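For intuition, the thresholding step can be sketched as follows: given the scores that a learned ranking function assigns to held-out (out-of-bootstrap) negative examples, a threshold is chosen so that the empirical false-positive rate on those negatives stays within the tolerance tau. The Python sketch below is a minimal single-sample quantile rule under that assumption; the function name oob_threshold and the synthetic scores are hypothetical and do not reproduce the paper's exact out-of-bootstrap procedure.

import numpy as np

def oob_threshold(neg_scores, tau):
    # Smallest threshold t such that predicting "positive" when score > t
    # keeps the empirical false-positive rate on these negatives at or below tau.
    s = np.sort(np.asarray(neg_scores, dtype=float))   # ascending negative scores
    n = s.size
    k = min(int(np.floor(tau * n)), n - 1)             # at most k negatives may exceed t
    return s[n - k - 1]                                 # (k+1)-th largest negative score

# Usage with synthetic scores standing in for a learned ranking function.
rng = np.random.default_rng(0)
pos_scores = rng.normal(1.0, 1.0, size=500)            # scores of positive examples
neg_scores = rng.normal(0.0, 1.0, size=2000)           # held-out negative examples
t = oob_threshold(neg_scores, tau=0.05)
fpr = np.mean(neg_scores > t)                          # <= 0.05 on this sample by construction
tpr = np.mean(pos_scores > t)                          # detection rate at the chosen threshold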

Comments: 32 pages, 3 figures. This is an extended version of our paper published in AAAI-18
Categories: cs.LG, cs.AI, stat.ML
Related articles:
arXiv:1705.08475 [cs.LG] (Published 2017-05-23)
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation
arXiv:1902.02948 [cs.LG] (Published 2019-02-08)
EILearn: Learning Incrementally Using Previous Knowledge Obtained From an Ensemble of Classifiers
arXiv:1910.08103 [cs.LG] (Published 2019-10-17)
Mapper Based Classifier