{ "id": "1803.02922", "version": "v1", "published": "2018-03-08T00:19:11.000Z", "updated": "2018-03-08T00:19:11.000Z", "title": "Fast Convergence for Stochastic and Distributed Gradient Descent in the Interpolation Limit", "authors": [ "Partha P Mitra" ], "comment": "Submitted to EUSIPCO 2018", "categories": [ "stat.ML", "cs.LG" ], "abstract": "Modern supervised learning techniques, particularly those using so called deep nets, involve fitting high dimensional labelled data sets with functions containing very large numbers of parameters. Much of this work is empirical, and interesting phenomena have been observed that require theoretical explanations, however the non-convexity of the loss functions complicates the analysis. Recently it has been proposed that some of the success of these techniques resides in the effectiveness of the simple stochastic gradient descent algorithm in the so called interpolation limit in which all labels are fit perfectly. This analysis is made possible since the SGD algorithm reduces to a stochastic linear system near the interpolating minimum of the loss function. Here we exploit this insight by analyzing a distributed algorithm for gradient descent, also in the interpolating limit. The algorithm corresponds to gradient descent applied to a simple penalized distributed loss function, $L({\\bf w}_1,...,{\\bf w}_n) = \\Sigma_i l_i({\\bf w}_i) + \\mu \\sum_{}|{\\bf w}_i-{\\bf w}_j|^2$. Here each node is allowed its own parameter vector and $$ denotes edges of a connected graph defining the communication links between nodes. It is shown that this distributed algorithm converges linearly (ie the error reduces exponentially with iteration number), with a rate $1-\\frac{\\eta}{n}\\lambda_{min}(H)