arXiv:1704.02718 [math.OC]

Distributed Learning for Cooperative Inference

Angelia Nedić, Alex Olshevsky, César A. Uribe

Published 2017-04-10 (Version 1)

We study the problem of cooperative inference, where a group of agents interacting over a network seek to estimate a joint parameter that best explains a set of observations. Agents know neither the network topology nor the observations of the other agents. Building on a variational interpretation of the Bayesian posterior density and its relation to the stochastic mirror descent algorithm, we propose a new distributed learning algorithm. We show that, under appropriate assumptions, the beliefs generated by the proposed algorithm concentrate around the true parameter exponentially fast, and we provide explicit non-asymptotic bounds on the convergence rate. Moreover, we develop explicit and computationally efficient algorithms for observation models belonging to exponential families.
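
The abstract does not state the update rule, but in this line of work on distributed non-Bayesian learning the belief update is typically geometric averaging of neighbors' beliefs followed by a local Bayesian likelihood step, which is what stochastic mirror descent under the KL-divergence geometry yields. The following is a minimal NumPy sketch of that standard update over a finite hypothesis set; the mixing matrix, the Gaussian observation models, and all names here are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    rng = np.random.default_rng(0)
    n_agents, n_hyp, T = 3, 2, 300

    # Assumed doubly stochastic mixing matrix encoding the network;
    # agents only use their own row (their neighbors' weights).
    A = np.array([[0.6, 0.4, 0.0],
                  [0.4, 0.2, 0.4],
                  [0.0, 0.4, 0.6]])

    # Agent i's predicted mean under each hypothesis (unit-variance
    # Gaussians); hypothesis 0 is the "true" parameter for all agents.
    means = np.array([[0.0, 1.0],
                      [0.0, 0.5],
                      [0.0, 2.0]])

    beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)  # uniform priors

    for t in range(T):
        x = rng.normal(means[:, 0])                    # draws from hypothesis 0
        log_b = A @ np.log(beliefs)                    # geometric averaging step
        log_b += -0.5 * (x[:, None] - means) ** 2      # local Gaussian log-likelihood
        log_b -= log_b.max(axis=1, keepdims=True)      # stabilize before exponentiating
        beliefs = np.exp(log_b)
        beliefs /= beliefs.sum(axis=1, keepdims=True)  # normalize each agent's belief

    print(np.round(beliefs, 4))  # belief mass concentrates on hypothesis 0

Running this sketch, each agent's belief on the wrong hypothesis decays roughly exponentially with t, mirroring the concentration behavior the abstract describes.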

Related articles:
arXiv:1605.02105 [math.OC] (Published 2016-05-06)
Distributed Learning with Infinitely Many Hypotheses
arXiv:2407.05863 [math.OC] (Published 2024-07-08)
Almost Sure Convergence and Non-asymptotic Concentration Bounds for Stochastic Mirror Descent Algorithm
arXiv:1902.11163 [math.OC] (Published 2019-02-26)
On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication