arXiv:1605.02105 [math.OC]

Distributed Learning with Infinitely Many Hypotheses

Angelia Nedić, Alex Olshevsky, César Uribe

Published 2016-05-06 (Version 1)

We consider a distributed learning setup in which a network of agents sequentially accesses realizations of a set of random variables with unknown distributions. The network objective is to find a parametrized distribution that best describes the agents' joint observations in the sense of the Kullback-Leibler divergence. In contrast to recent efforts in the literature, we analyze the case of countably many hypotheses and the case of a continuum of hypotheses. We provide non-asymptotic bounds on the concentration rate of the agents' beliefs around the correct hypothesis in terms of the number of agents, the network parameters, and the learning abilities of the agents. Additionally, we provide a novel motivation for a general set of distributed non-Bayesian update rules as instances of the distributed stochastic mirror descent algorithm.
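To make the setup concrete, the sketch below illustrates the standard distributed non-Bayesian update rule of the kind the abstract alludes to: each agent geometrically averages its neighbors' beliefs (consensus in log space, the mirror-descent step) and then reweights by the likelihood of its newest local observation. The ring network, Gaussian likelihood model, and finite hypothesis grid are illustrative assumptions for this sketch, not taken from the paper, which treats countably infinite and continuum hypothesis sets.

```python
# Minimal sketch of a distributed non-Bayesian learning rule on a finite
# hypothesis grid (assumptions: ring network, Gaussian observation model).
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_hyp, T = 4, 50, 200
theta_grid = np.linspace(-2.0, 2.0, n_hyp)     # candidate parameters (hypotheses)
theta_star = 0.5                               # true parameter (assumed)

# Doubly stochastic mixing matrix for a ring network (assumption).
A = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    A[i, i] = 0.5
    A[i, (i + 1) % n_agents] = 0.25
    A[i, (i - 1) % n_agents] = 0.25

def gaussian_loglik(x, mean, sigma=1.0):
    return -0.5 * ((x - mean) / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma**2)

# Uniform initial beliefs over the hypothesis grid.
beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)

for t in range(T):
    obs = theta_star + rng.normal(size=n_agents)      # each agent's new sample
    log_mix = A @ np.log(beliefs)                     # log-linear consensus step
    log_post = log_mix + gaussian_loglik(obs[:, None], theta_grid[None, :])
    log_post -= log_post.max(axis=1, keepdims=True)   # numerically stable normalization
    beliefs = np.exp(log_post)
    beliefs /= beliefs.sum(axis=1, keepdims=True)

# Beliefs should concentrate near the hypothesis closest to theta_star.
print("agent 0 argmax hypothesis:", theta_grid[np.argmax(beliefs[0])])
```

Running the sketch, each agent's belief mass concentrates on the grid point nearest the true parameter, which is the qualitative behavior the paper's non-asymptotic concentration bounds quantify.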

Related articles:
arXiv:1902.11163 [math.OC] (Published 2019-02-26)
On Maintaining Linear Convergence of Distributed Learning and Optimization under Limited Communication
arXiv:1704.02718 [math.OC] (Published 2017-04-10)
Distributed Learning for Cooperative Inference
arXiv:1806.06573 [math.OC] (Published 2018-06-18)
Distributed learning with compressed gradients