arXiv Analytics

arXiv:1908.09963 [math.OC]

Deep-Learning Based Linear Average Consensus for Faster Convergence over Temporal Network

Masako Kishida, Masaki Ogura, Tadashi Wadayama

Published 2019-08-27 (Version 1)

In this paper, we study the problem of accelerating the linear average consensus algorithm over complex networks. Specifically, we present a data-driven methodology for tuning the weights of temporal (i.e., time-varying) networks using deep learning techniques. We first unfold the linear average consensus protocol to obtain a feedforward signal-flow graph, which we regard as a neural network. We then train the neural network with standard deep learning techniques to minimize the consensus error over a given finite time horizon. The training yields a set of optimized time-varying weights that achieve faster consensus on the network. Numerical simulations show that our methodology achieves a significantly smaller consensus error than the optimal static strategy.
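To make the unfolding concrete, below is a minimal PyTorch sketch of the idea described in the abstract, not the authors' implementation: each consensus step x_{k+1} = W_k x_k becomes one layer with its own trainable weight matrix, masked to the graph's edge set, and the weights are trained to minimize the squared consensus error at the final step. The ring topology, horizon, batch size, and optimizer settings are illustrative assumptions.

import torch

# Problem size: n agents, T consensus steps (illustrative values).
n, T = 8, 10

# Assumed topology: a ring with self-loops; mask marks allowed edges.
A = torch.eye(n)
for i in range(n):
    A[i, (i + 1) % n] = 1.0
    A[i, (i - 1) % n] = 1.0
mask = (A > 0).float()

# One trainable weight matrix per time step (a temporal network),
# initialized to uniform averaging over each agent's neighborhood.
Ws = [torch.nn.Parameter(A / A.sum(1, keepdim=True)) for _ in range(T)]
opt = torch.optim.Adam(Ws, lr=1e-2)

for step in range(2000):
    x = torch.randn(n, 64)                         # batch of random initial states
    target = x.mean(0, keepdim=True).expand_as(x)  # the exact average of each initial state
    y = x
    for W in Ws:
        y = (mask * W) @ y                         # one unfolded consensus layer
    loss = ((y - target) ** 2).mean()              # consensus error at horizon T
    opt.zero_grad()
    loss.backward()
    opt.step()

The mask keeps each W_k supported on the graph's edges; no stochasticity constraint is imposed in this sketch, since the loss itself penalizes any deviation from the exact average. Comparing the final loss against a single fixed weight matrix applied T times would mirror the paper's comparison with the static strategy.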

Related articles:
arXiv:2210.05995 [math.OC] (Published 2022-10-12)
SGDA with shuffling: faster convergence for nonconvex-PŁ minimax optimization
arXiv:1709.00982 [math.OC] (Published 2017-09-04)
Faster Convergence of a Randomized Coordinate Descent Method for Linearly Constrained Optimization Problems
arXiv:1806.04207 [math.OC] (Published 2018-06-11)
Swarming for Faster Convergence in Stochastic Optimization