arXiv:2106.06251 [stat.ML]

On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting

Shunta Akiyama, Taiji Suzuki

Published 2021-06-11 (Version 1)

Deep learning empirically achieves high performance in many applications, but its training dynamics have not been fully understood theoretically. In this paper, we present a theoretical analysis of training two-layer ReLU neural networks in a teacher-student regression model, in which a student network learns an unknown teacher network through its outputs. We show that, with a specific regularization and sufficient over-parameterization, the student network can identify the parameters of the teacher network with high probability via gradient descent with a norm-dependent stepsize, even though the objective function is highly non-convex. The key theoretical tools are the measure representation of neural networks and a novel application of a dual certificate argument for sparse estimation on a measure space. We analyze the global minima and the global convergence property in the measure space.
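To make the setting concrete, below is a minimal sketch of teacher-student training of a two-layer ReLU network by gradient descent. All sizes, the regularization strength, and the norm-dependent stepsize schedule are illustrative assumptions and do not reproduce the paper's specific regularizer, stepsize rule, or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (not taken from the paper)
d = 5            # input dimension
m_teacher = 3    # teacher width
m_student = 12   # over-parameterized student width
n = 512          # number of training samples

relu = lambda z: np.maximum(z, 0.0)

# Fixed, unknown teacher network f*(x) = sum_j a*_j relu(w*_j . x)
W_star = rng.normal(size=(m_teacher, d))
a_star = rng.normal(size=m_teacher)

X = rng.normal(size=(n, d))
y = relu(X @ W_star.T) @ a_star   # noiseless teacher outputs

# Over-parameterized student parameters
W = 0.5 * rng.normal(size=(m_student, d))
a = 0.5 * rng.normal(size=m_student)

lam = 1e-3   # L2 regularization strength (hypothetical value)

for step in range(2000):
    H = relu(X @ W.T)       # (n, m_student) hidden activations
    resid = H @ a - y       # residuals of the student prediction

    # Gradients of 0.5 * mean squared error + 0.5 * lam * ||params||^2
    grad_a = H.T @ resid / n + lam * a
    mask = (X @ W.T > 0).astype(float)   # ReLU derivative
    grad_W = ((resid[:, None] * mask) * a).T @ X / n + lam * W

    # A norm-dependent stepsize: shrink the update as parameter norms grow
    # (a stand-in for the paper's schedule, which is not reproduced here).
    eta = 0.5 / (1.0 + np.linalg.norm(W) + np.linalg.norm(a))
    a -= eta * grad_a
    W -= eta * grad_W

mse = np.mean((relu(X @ W.T) @ a - y) ** 2)
print("final mean squared error:", mse)
```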

Related articles:
arXiv:2205.14818 [stat.ML] (Published 2022-05-30)
Excess Risk of Two-Layer ReLU Neural Networks in Teacher-Student Settings and its Superiority to Kernel Methods
arXiv:2001.06892 [stat.ML] (Published 2020-01-19)
Optimal Rate of Convergence for Deep Neural Network Classifiers under the Teacher-Student Setting
arXiv:1910.08280 [stat.ML] (Published 2019-10-18)
Robust modal regression with direct log-density derivative estimation