arXiv Analytics

arXiv:2105.14035 [stat.ML]

DeepMoM: Robust Deep Learning With Median-of-Means

Shih-Ting Huang, Johannes Lederer

Published 2021-05-28 (Version 1)

Data used in deep learning is notoriously problematic. For example, data are usually combined from diverse sources, rarely cleaned and vetted thoroughly, and sometimes corrupted on purpose. Intentional corruption that targets the weak spots of algorithms has been studied extensively under the label of "adversarial attacks." In contrast, the arguably much more common case of corruption that reflects the limited quality of data has been studied much less. Such "random" corruptions are due to measurement errors, unreliable sources, convenience sampling, and so forth. These kinds of corruption are common in deep learning, because data are rarely collected according to strict protocols -- in strong contrast to the formalized data collection in some parts of classical statistics. This paper concerns such corruption. We introduce an approach motivated by very recent insights into median-of-means and Le Cam's principle, we show that the approach can be readily implemented, and we demonstrate that it performs very well in practice. In conclusion, we believe that our approach is a very promising alternative to standard parameter training based on least-squares and cross-entropy loss.
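The core idea behind median-of-means is simple: split the sample into disjoint blocks, average each block, and report the median of the block means, so that a few grossly corrupted observations can spoil at most a few blocks rather than the whole estimate. The sketch below illustrates this estimator on synthetic data; it is a minimal, self-contained example of the general median-of-means principle, not the DeepMoM training procedure itself (which applies the idea to losses during network training).

```python
import random
import statistics

def median_of_means(values, num_blocks):
    """Median-of-means estimator: shuffle the sample, split it into
    `num_blocks` equal-sized blocks, average each block, and return
    the median of the block means. Outliers can corrupt only the
    blocks they land in, so the median of block means stays stable
    as long as fewer than half the blocks are contaminated."""
    shuffled = list(values)
    random.shuffle(shuffled)
    block_size = len(shuffled) // num_blocks
    block_means = [
        statistics.mean(shuffled[i * block_size:(i + 1) * block_size])
        for i in range(num_blocks)
    ]
    return statistics.median(block_means)

# Mostly clean Gaussian data around 5, plus a few gross outliers.
random.seed(0)
data = [random.gauss(5.0, 1.0) for _ in range(990)] + [1e6] * 10

plain_mean = statistics.mean(data)              # dragged far from 5
robust_mean = median_of_means(data, num_blocks=30)  # stays near 5
```

With 10 outliers spread across 30 blocks, at most 10 block means are contaminated, so the median over the 30 blocks remains close to the true mean while the plain mean is pulled off by several orders of magnitude.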

Related articles:
arXiv:2106.06097 [stat.ML] (Published 2021-06-11)
Neural Optimization Kernel: Towards Robust Deep Learning
arXiv:1805.10652 [stat.ML] (Published 2018-05-27)
Defending Against Adversarial Attacks by Leveraging an Entire GAN
arXiv:2206.03353 [stat.ML] (Published 2022-06-07)
Adaptive Regularization for Adversarial Training