{ "id": "1805.09545", "version": "v1", "published": "2018-05-24T08:28:01.000Z", "updated": "2018-05-24T08:28:01.000Z", "title": "On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport", "authors": [ "Lenaic Chizat", "Francis Bach" ], "categories": [ "math.OC", "cs.NE", "stat.ML" ], "abstract": "Many tasks in machine learning and signal processing can be solved by minimizing a convex function of a measure. This includes sparse spikes deconvolution or training a neural network with a single hidden layer. For these problems, we study a simple minimization method: the unknown measure is discretized into a mixture of particles and a continuous-time gradient descent is performed on their weights and positions. This is an idealization of the usual way to train neural networks with a large hidden layer. We show that, when initialized correctly and in the many-particle limit, this gradient flow, although non-convex, converges to global minimizers. The proof involves Wasserstein gradient flows, a by-product of optimal transport theory. Numerical experiments show that this asymptotic behavior is already at play for a reasonable number of particles, even in high dimension.", "revisions": [ { "version": "v1", "updated": "2018-05-24T08:28:01.000Z" } ], "analyses": { "keywords": [ "global convergence", "over-parameterized models", "continuous-time gradient descent", "single hidden layer", "sparse spikes deconvolution" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }