arXiv Analytics


arXiv:1512.04011 [cs.LG]

L1-Regularized Distributed Optimization: A Communication-Efficient Primal-Dual Framework

Virginia Smith, Simone Forte, Michael I. Jordan, Martin Jaggi

Published 2015-12-13 (Version 1)

Despite the importance of sparsity in many big data applications, there are few existing methods for efficient distributed optimization of objectives with sparsity-inducing regularizers. In this paper, we present a communication-efficient framework for L1-regularized optimization in distributed environments. By taking a non-traditional view of classical objectives as part of a more general primal-dual setting, we obtain a new class of methods that can be efficiently distributed and applied to common L1-regularized regression and classification objectives, such as the Lasso, sparse logistic regression, and elastic net regression. We provide convergence guarantees for this framework and demonstrate strong empirical performance compared to other state-of-the-art methods on several real-world distributed datasets.
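For concreteness, the objectives named in the abstract can all be written in a common composite form. The notation below (data matrix A, labels b, weights x, regularization parameters λ, μ, smooth loss f, separable regularizer g) is a generic illustration, not necessarily the paper's own formulation:

\[
\min_{x \in \mathbb{R}^n} \; f(Ax) + g(x),
\]
where, for example,
\[
f(Ax) = \tfrac{1}{2}\|Ax - b\|_2^2, \qquad g(x) = \lambda\|x\|_1 \qquad \text{(Lasso)},
\]
\[
f(Ax) = \sum_{i=1}^{m} \log\!\bigl(1 + \exp(-b_i\, a_i^{\top} x)\bigr), \qquad g(x) = \lambda\|x\|_1 \qquad \text{(sparse logistic regression)},
\]
\[
f(Ax) = \tfrac{1}{2}\|Ax - b\|_2^2, \qquad g(x) = \lambda\|x\|_1 + \tfrac{\mu}{2}\|x\|_2^2 \qquad \text{(elastic net)}.
\]

In a distributed setting, the columns of A and the corresponding coordinates of x can be partitioned across machines, and the coordinate-wise separability of g is what a primal-dual treatment can exploit; the precise formulation and the resulting algorithm are given in the full paper.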

Related articles:
arXiv:2403.11395 [cs.LG] (Published 2024-03-18)
Automated data processing and feature engineering for deep learning and big data applications: a survey
arXiv:2104.13968 [cs.LG] (Published 2021-04-28)
Tail-Net: Extracting Lowest Singular Triplets for Big Data Applications
arXiv:2307.01601 [cs.LG] (Published 2023-07-04)
Prototypes as Explanation for Time Series Anomaly Detection