arXiv Analytics

arXiv:1909.09338 [cs.LG]

A Simple yet Effective Baseline for Robust Deep Learning with Noisy Labels

Yucen Luo, Jun Zhu, Tomas Pfister

Published 2019-09-20 (Version 1)

Recently, deep neural networks have been shown to memorize training data, even with noisy labels, which hurts generalization performance. To mitigate this issue, we propose a simple yet effective baseline that is robust to noisy labels, even under severe noise. Our objective involves a variance regularization term that implicitly penalizes the Jacobian norm of the neural network over the whole training set (including the noisy-labeled data), which encourages generalization and prevents overfitting to the corrupted labels. Experiments on both synthetically corrupted labels and realistic large-scale noisy datasets demonstrate that our approach achieves state-of-the-art performance with high tolerance to severe noise.
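The abstract describes the objective only at a high level. As a rough illustration, below is a minimal PyTorch-style sketch of a variance-regularized loss, assuming the variance term is estimated from two stochastic forward passes over the same batch (e.g., with dropout or random augmentation active), so that fluctuations in the predictions, and hence implicitly the Jacobian norm, are penalized. The names variance_regularized_loss and weight are hypothetical and not from the paper.

    import torch
    import torch.nn.functional as F

    def variance_regularized_loss(model, x, y_noisy, weight=1.0):
        """Cross-entropy on (possibly noisy) labels plus a variance penalty.

        Two stochastic forward passes (model must be in train mode so
        dropout/augmentation differ between passes) give two predictions
        for the same inputs; their squared difference is an unbiased
        estimate of twice the per-coordinate prediction variance.
        Penalizing it discourages sensitivity to small perturbations,
        which implicitly bounds the Jacobian norm.
        """
        logits1 = model(x)  # stochastic pass 1
        logits2 = model(x)  # stochastic pass 2 (different dropout mask)
        ce = F.cross_entropy(logits1, y_noisy)
        p1 = F.softmax(logits1, dim=1)
        p2 = F.softmax(logits2, dim=1)
        var = ((p1 - p2) ** 2).sum(dim=1).mean()
        return ce + weight * var

Note that the variance term never touches the labels, so it can be computed on the whole training set, corrupted labels included, as the abstract states; only the cross-entropy term uses the noisy labels.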

Related articles:
arXiv:1812.03699 [cs.LG] (Published 2018-12-10)
Taxi Demand-Supply Forecasting: Impact of Spatial Partitioning on the Performance of Neural Networks
arXiv:1811.09054 [cs.LG] (Published 2018-11-22)
Enhanced Expressive Power and Fast Training of Neural Networks by Random Projections
arXiv:1902.04205 [cs.LG] (Published 2019-02-12)
Improving learnability of neural networks: adding supplementary axes to disentangle data representation