arXiv Analytics


arXiv:2209.07263 [cs.LG]

Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)

Zhenyu Zhu, Fanghui Liu, Grigorios G Chrysos, Volkan Cevher

Published 2022-09-15 (Version 1)

We study the notion of average robustness in deep neural networks across (selected) wide and narrow, deep and shallow, as well as lazy and non-lazy training settings. We prove that width has a negative effect on robustness in the under-parameterized setting but improves it in the over-parameterized setting. The effect of depth depends closely on the initialization and the training mode. In particular, under LeCun initialization, depth helps robustness in the lazy training regime. In contrast, under Neural Tangent Kernel (NTK) and He initialization, depth hurts robustness. Moreover, in the non-lazy training regime, we demonstrate how the width of a two-layer ReLU network benefits robustness. Our theoretical developments improve upon the results of Huang et al. [2021] and Wu et al. [2021], and are consistent with Bubeck and Sellke [2021] and Bubeck et al. [2021].
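To make the role of initialization concrete, below is a minimal sketch (not the paper's construction) of a two-layer ReLU network under the three initialization schemes named in the abstract, together with a Monte Carlo proxy for average robustness: the mean change in output under small random input perturbations. The variance conventions (LeCun: 1/fan_in, He: 2/fan_in, NTK: unit-variance weights with explicit 1/sqrt(fan_in) scaling in the forward pass) and the robustness proxy itself are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_two_layer(d_in, width, scheme="lecun"):
    """Sample weights for f(x) = W2 @ relu(W1 @ x) under one of three schemes."""
    if scheme == "lecun":        # Var = 1 / fan_in
        W1 = rng.normal(0, np.sqrt(1.0 / d_in), (width, d_in))
        W2 = rng.normal(0, np.sqrt(1.0 / width), (1, width))
        s1, s2 = 1.0, 1.0
    elif scheme == "he":         # Var = 2 / fan_in
        W1 = rng.normal(0, np.sqrt(2.0 / d_in), (width, d_in))
        W2 = rng.normal(0, np.sqrt(2.0 / width), (1, width))
        s1, s2 = 1.0, 1.0
    elif scheme == "ntk":        # unit-variance weights, scaling in forward pass
        W1 = rng.normal(0, 1.0, (width, d_in))
        W2 = rng.normal(0, 1.0, (1, width))
        s1, s2 = 1.0 / np.sqrt(d_in), 1.0 / np.sqrt(width)
    else:
        raise ValueError(scheme)
    return W1, W2, s1, s2

def forward(x, W1, W2, s1, s2):
    return (s2 * W2) @ np.maximum(s1 * (W1 @ x), 0.0)

def avg_perturbation_sensitivity(d_in=64, width=512, scheme="lecun",
                                 eps=1e-2, n_samples=200):
    """Mean |f(x + delta) - f(x)| over random unit inputs x and
    random perturbations delta of norm eps (an illustrative proxy only)."""
    W1, W2, s1, s2 = init_two_layer(d_in, width, scheme)
    diffs = []
    for _ in range(n_samples):
        x = rng.normal(size=d_in)
        x /= np.linalg.norm(x)
        delta = rng.normal(size=d_in)
        delta *= eps / np.linalg.norm(delta)
        diffs.append(abs(forward(x + delta, W1, W2, s1, s2)
                         - forward(x, W1, W2, s1, s2))[0])
    return float(np.mean(diffs))

for scheme in ("lecun", "he", "ntk"):
    print(scheme, avg_perturbation_sensitivity(scheme=scheme))
```

Sweeping `width` (or stacking more layers) under each scheme gives a quick empirical feel for the width/depth/initialization interplay the abstract describes, though the paper's formal statements concern its own average robustness definition rather than this proxy.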

Related articles:
arXiv:1801.07648 [cs.LG] (Published 2018-01-23)
Clustering with Deep Learning: Taxonomy and New Methods
arXiv:1712.04301 [cs.LG] (Published 2017-12-09)
Deep Learning for IoT Big Data and Streaming Analytics: A Survey
arXiv:1710.10686 [cs.LG] (Published 2017-10-29)
Regularization for Deep Learning: A Taxonomy