arXiv:2009.06202 [cs.LG]

Risk Bounds for Robust Deep Learning

Johannes Lederer

Published 2020-09-14 (Version 1)

It has been observed that certain loss functions can render deep-learning pipelines robust against flaws in the data. In this paper, we support these empirical findings with statistical theory. In particular, we show that empirical-risk minimization with unbounded, Lipschitz-continuous loss functions, such as the least-absolute-deviation loss, Huber loss, Cauchy loss, and Tukey's biweight loss, can provide efficient prediction under minimal assumptions on the data. More generally, our paper provides theoretical evidence for the benefits of robust loss functions in deep learning.
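For concreteness, here is a minimal NumPy sketch of the robust losses named in the abstract, applied to residuals r = y - f(x). The tuning constants delta and c below are common defaults, not values taken from the paper:

```python
import numpy as np

def lad_loss(r):
    # Least-absolute-deviation loss: |r|; unbounded and 1-Lipschitz.
    return np.abs(r)

def huber_loss(r, delta=1.0):
    # Huber loss: quadratic near zero, linear in the tails (delta=1.0 is a common default).
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def cauchy_loss(r, c=1.0):
    # Cauchy loss: logarithmic growth; its derivative r / (1 + (r/c)**2) is bounded,
    # so the loss is Lipschitz-continuous.
    return 0.5 * c**2 * np.log1p((r / c) ** 2)

def tukey_biweight_loss(r, c=4.685):
    # Tukey's biweight loss: flat for |r| > c, fully discounting gross outliers
    # (c=4.685 is the usual 95%-efficiency constant under Gaussian errors).
    a = np.abs(r)
    return np.where(a <= c, (c**2 / 6.0) * (1.0 - (1.0 - (r / c) ** 2) ** 3), c**2 / 6.0)

# Example: a gross outlier inflates the squared loss far more than the robust losses.
residuals = np.array([0.1, -0.5, 8.0])  # last entry mimics a flaw in the data
for loss in (lad_loss, huber_loss, cauchy_loss, tukey_biweight_loss):
    print(loss.__name__, loss(residuals))
```

In a deep-learning pipeline, any of these would replace the squared-error term inside the empirical risk that is minimized over the network parameters.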

Related articles:
arXiv:1910.13886 [cs.LG] (Published 2019-10-30)
Risk bounds for reservoir computing
arXiv:2103.15569 [cs.LG] (Published 2021-03-29)
Risk Bounds for Learning via Hilbert Coresets
arXiv:1803.09050 [cs.LG] (Published 2018-03-24, updated 2018-06-08)
Learning to Reweight Examples for Robust Deep Learning