{ "id": "2106.06251", "version": "v1", "published": "2021-06-11T09:05:41.000Z", "updated": "2021-06-11T09:05:41.000Z", "title": "On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting", "authors": [ "Shunta Akiyama", "Taiji Suzuki" ], "comment": "47 pages, 3 figures", "categories": [ "stat.ML", "cs.LG" ], "abstract": "Deep learning empirically achieves high performance in many applications, but its training dynamics has not been fully understood theoretically. In this paper, we explore theoretical analysis on training two-layer ReLU neural networks in a teacher-student regression model, in which a student network learns an unknown teacher network through its outputs. We show that with a specific regularization and sufficient over-parameterization, the student network can identify the parameters of the teacher network with high probability via gradient descent with a norm dependent stepsize even though the objective function is highly non-convex. The key theoretical tool is the measure representation of the neural networks and a novel application of a dual certificate argument for sparse estimation on a measure space. We analyze the global minima and global convergence property in the measure space.", "revisions": [ { "version": "v1", "updated": "2021-06-11T09:05:41.000Z" } ], "analyses": { "keywords": [ "two-layer relu neural networks", "gradient method", "empirically achieves high performance", "teacher-student setting", "learning empirically achieves high" ], "note": { "typesetting": "TeX", "pages": 47, "language": "en", "license": "arXiv", "status": "editable" } } }