arXiv:2012.01474 [cs.LG]

Second-Order Guarantees in Federated Learning

Stefan Vlaski, Elsa Rizk, Ali H. Sayed

Published 2020-12-02 (Version 1)

Federated learning is a useful framework for centralized learning from distributed data under practical considerations of heterogeneity, asynchrony, and privacy. Federated architectures are frequently deployed in deep learning settings, which generally give rise to non-convex optimization problems. Nevertheless, most existing analyses are either limited to convex loss functions or establish only first-order stationarity, despite the fact that saddle points, which are first-order stationary, are known to pose bottlenecks in deep learning. We draw on recent results on the second-order optimality of stochastic gradient algorithms in centralized and decentralized settings, and establish second-order guarantees for a class of federated learning algorithms.
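For context, the distinction between the two notions of stationarity can be stated in the usual form (an illustrative sketch with generic notation, not symbols taken from the paper itself). A point $w$ is an $\epsilon$-first-order stationary point of a smooth aggregate risk $J(w)$ if

    \| \nabla J(w) \| \le \epsilon,

and an approximate second-order stationary point if, in addition, the Hessian has no strongly negative curvature, i.e.,

    \lambda_{\min}\!\left( \nabla^2 J(w) \right) \ge -\tau

for some tolerance $\tau > 0$. Strict saddle points satisfy the first condition but violate the second, so a second-order guarantee certifies that the iterates escape such saddle points rather than merely attaining small gradient norms.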

Related articles:
arXiv:2009.06303 [cs.LG] (Published 2020-09-14)
Fed+: A Family of Fusion Algorithms for Federated Learning
arXiv:1812.11750 [cs.LG] (Published 2018-12-31)
Federated Learning via Over-the-Air Computation
arXiv:2003.08673 [cs.LG] (Published 2020-03-19)
Survey of Personalization Techniques for Federated Learning