{ "id": "1407.2724", "version": "v2", "published": "2014-07-10T08:25:49.000Z", "updated": "2015-06-13T15:14:47.000Z", "title": "On the Optimality of Averaging in Distributed Statistical Learning", "authors": [ "Jonathan Rosenblatt", "Boaz Nadler" ], "comment": "Major changes from previous version. Particularly on the second order error approximation and implications", "categories": [ "stat.ML", "math.ST", "stat.TH" ], "abstract": "A common approach to statistical learning with big-data is to randomly split it among $m$ machines and learn the parameter of interest by averaging the $m$ individual estimates. In this paper, focusing on empirical risk minimization, or equivalently M-estimation, we study the statistical error incurred by this strategy. We consider two large-sample settings: First, a classical setting where the number of parameters $p$ is fixed, and the number of samples per machine $n\\to\\infty$. Second, a high-dimensional regime where both $p,n\\to\\infty$ with $p/n \\to \\kappa \\in (0,1)$. For both regimes and under suitable assumptions, we present asymptotically exact expressions for this estimation error. In the fixed-$p$ setting, under suitable assumptions, we prove that to leading order averaging is as accurate as the centralized solution. We also derive the second order error terms, and show that these can be non-negligible, notably for non-linear models. The high-dimensional setting, in contrast, exhibits a qualitatively different behavior: data splitting incurs a first-order accuracy loss, which to leading order increases linearly with the number of machines. The dependence of our error approximations on the number of machines traces an interesting accuracy-complexity tradeoff, allowing the practitioner an informed choice on the number of machines to deploy. Finally, we confirm our theoretical analysis with several simulations.", "revisions": [ { "version": "v1", "updated": "2014-07-10T08:25:49.000Z", "abstract": "A common approach to statistical learning on big data is to randomly distribute it among $m$ machines and calculate the parameter of interest by merging their $m$ individual estimates. Two key questions related to this approach are: What is the optimal aggregation procedure, and what is the accuracy loss in comparison to centralized computation. We make several contributions to these questions, under the general framework of empirical risk minimization, a.k.a. M-estimation. As data is abundant, we assume the number of samples per machine, $n$, is large and study two asymptotic settings: one where $n \\to \\infty$ but the number of estimated parameters $p$ is fixed, and a second high-dimensional case where both $p,n\\to\\infty$ with $p/n \\to \\kappa \\in (0,1)$. Our main results include asymptotically exact expressions for the loss incurred by splitting the data, where only bounds were previously available. These are derived independently of the learning algorithm. Consequently, under suitable assumptions in the fixed-$p$ setting, averaging is {\\em first-order equivalent} to a centralized solution, and thus inherits statistical properties like efficiency and robustness. In the high-dimension setting, studied here for the first time in the context of parallelization, a qualitatively different behaviour appears. Parallelized computation generally incurs an accuracy loss, for which we derive a simple approximate formula. 
We conclude with several practical implications of our results.", "comment": null, "journal": null, "doi": null }, { "version": "v2", "updated": "2015-06-13T15:14:47.000Z" } ], "analyses": { "keywords": [ "distributed statistical learning", "optimality", "accuracy loss", "simple approximate formula", "second high-dimensional case" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable", "adsabs": "2014arXiv1407.2724R" } } }
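The abstract describes a one-shot split-and-average scheme for M-estimation: split the data across $m$ machines, fit the same estimator on each local block, and average the $m$ local estimates. The sketch below is a minimal Python illustration of that scheme on a toy problem, using ordinary least squares as the M-estimation step; the machine count, sample sizes, and all variable names are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch of the split-and-average strategy described in the abstract,
# using ordinary least squares as a concrete (linear) M-estimation problem.
# All names and parameter values here are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

m, n, p = 16, 500, 10            # machines, samples per machine, parameters
theta_true = rng.normal(size=p)

# Full data set of N = m * n samples, split row-wise into m equal blocks.
X = rng.normal(size=(m * n, p))
y = X @ theta_true + rng.normal(size=m * n)

def ols(Xb, yb):
    """Per-machine M-estimate: a least-squares fit on one block of data."""
    return np.linalg.lstsq(Xb, yb, rcond=None)[0]

# One-shot averaging: each machine fits on its own block, estimates are averaged.
local_estimates = [ols(Xk, yk) for Xk, yk in zip(np.split(X, m), np.split(y, m))]
theta_avg = np.mean(local_estimates, axis=0)

# Centralized benchmark: a single fit on all N samples.
theta_central = ols(X, y)

print("error (averaged)   :", np.linalg.norm(theta_avg - theta_true))
print("error (centralized):", np.linalg.norm(theta_central - theta_true))
```

In this fixed-$p$, large-$n$ toy configuration the averaged and centralized errors come out comparable, consistent with the first-order equivalence the abstract claims for that regime; the high-dimensional regime the abstract studies ($p/n \to \kappa$) is where data splitting is predicted to cost a first-order accuracy loss that grows with the number of machines.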