arXiv:1908.09963 [math.OC]
Deep-Learning Based Linear Average Consensus for Faster Convergence over Temporal Network
Masako Kishida, Masaki Ogura, Tadashi Wadayama
Published 2019-08-27 (Version 1)
In this paper, we study the problem of accelerating the linear average consensus algorithm over complex networks. We specifically present a data-driven methodology for tuning the weights of temporal (i.e., time-varying) networks by using deep learning techniques. We first unfold the linear average consensus protocol to obtain a feedforward signal-flow graph, which we regard as a neural network. We then train the neural network with standard deep learning techniques to minimize the consensus error over a given finite time horizon. As a result of the training, we obtain a set of optimized time-varying weights that yield faster consensus in the network. Numerical simulations show that our methodology achieves a significantly smaller consensus error than the static optimal strategy.
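To illustrate the idea of unfolding the consensus iterations and training the per-step weights, the following is a minimal sketch, not the authors' implementation. It assumes a PyTorch setup, a small ring graph with self-loops as a stand-in for the network, and a simple row-stochastic parameterization of the time-varying weight matrices; the graph, horizon, and training hyperparameters are all hypothetical choices for illustration.

```python
import torch

torch.manual_seed(0)

n, T = 10, 8                     # number of agents, unfolded time horizon
# hypothetical example graph: a ring with self-loops (adjacency mask)
A = torch.eye(n)
for i in range(n):
    A[i, (i + 1) % n] = 1.0
    A[i, (i - 1) % n] = 1.0

# one learnable weight matrix per time step, restricted to the graph edges
raw = torch.nn.Parameter(torch.randn(T, n, n) * 0.1)
opt = torch.optim.Adam([raw], lr=1e-2)

def step_matrices(raw):
    # mask out non-edges, then normalize each row so W_k is row-stochastic
    return torch.softmax(raw.masked_fill(A == 0, float("-inf")), dim=-1)

for epoch in range(2000):
    x = torch.randn(256, n)               # batch of random initial states
    target = x.mean(dim=1, keepdim=True)  # the average each agent should reach
    W = step_matrices(raw)
    for k in range(T):                    # unfolded iterations x_{k+1} = W_k x_k
        x = x @ W[k].T
    loss = ((x - target) ** 2).mean()     # consensus error at the final step
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final consensus error:", loss.item())
```

The unrolled loop over `T` steps plays the role of the feedforward signal-flow graph described above: each layer applies one time-varying weight matrix, and backpropagation through the unrolled graph tunes all `T` matrices jointly toward a small consensus error at the horizon. The row-stochastic softmax parameterization is one convenient way to keep the weights feasible during training; the paper's actual constraints on the weights may differ.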