arXiv Analytics

arXiv:2104.06574 [cs.LG]

Joint Negative and Positive Learning for Noisy Labels

Youngdong Kim, Juseung Yun, Hyounguk Shon, Junmo Kim

Published 2021-04-14 (Version 1)

Training Convolutional Neural Networks (CNNs) on data with noisy labels is known to be challenging. Because directly providing the given label to the network (Positive Learning; PL) risks letting the CNN memorize contaminated labels, the indirect approach of learning from complementary labels (Negative Learning for Noisy Labels; NLNL) has proven highly effective at preventing overfitting to noisy data, as it reduces the risk of providing a faulty target. However, NLNL employs a three-stage pipeline to improve convergence, so filtering noisy data through it is cumbersome and increases the training cost. In this study, we propose Joint Negative and Positive Learning (JNPL), a novel improvement of NLNL that unifies the filtering pipeline into a single stage. JNPL trains the CNN with two losses, NL+ and PL+, which improve upon the NL and PL loss functions, respectively. We analyze a fundamental issue of the NL loss function and develop a new NL+ loss function whose gradient enhances convergence on noisy data. Furthermore, the PL+ loss function is designed to enable faster convergence on data that is expected to be clean. NL+ and PL+ train the CNN simultaneously, significantly simplifying the pipeline and making our method far easier to use in practice than NLNL. Combined with a simple semi-supervised training technique, our method achieves state-of-the-art accuracy on noisy-label classification, owing to its superior filtering ability.
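For context, the baseline PL and NL losses that JNPL builds on can be sketched as follows. This is a minimal illustration of the standard positive-learning loss (cross-entropy on the given label) and the complementary-label negative-learning loss from the earlier NLNL work; the JNPL-specific NL+ and PL+ variants are not specified in this abstract, so they are not reproduced here, and all function names are illustrative rather than from the authors' code.

```python
import math

def softmax(logits):
    """Convert raw scores to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def pl_loss(probs, label):
    """Positive Learning: push the probability of the GIVEN label
    toward 1. Risky when the label may be wrong."""
    return -math.log(probs[label])

def nl_loss(probs, complementary_label):
    """Negative Learning: push the probability of a label the sample
    does NOT belong to toward 0 -- a weaker but safer signal when
    the given labels may be noisy."""
    return -math.log(1.0 - probs[complementary_label])

probs = softmax([2.0, 1.0, 0.1])
# With a possibly-noisy label 0, PL trusts that label fully...
loss_pl = pl_loss(probs, 0)
# ...while NL only asserts "this sample is not class 2".
loss_nl = nl_loss(probs, 2)
```

Note how NL supplies a gentler gradient: asserting that one class is wrong constrains the model far less than asserting that one class is right, which is why it is less prone to memorizing contaminated labels.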

Related articles:
arXiv:1908.07387 [cs.LG] (Published 2019-08-19)
NLNL: Negative Learning for Noisy Labels
arXiv:2206.13140 [cs.LG] (Published 2022-06-27)
Compressing Features for Learning with Noisy Labels
arXiv:2007.05305 [cs.LG] (Published 2020-07-10)
ExpertNet: Adversarial Learning and Recovery Against Noisy Labels