arXiv:1601.00667 [math.PR]

Reinforcement learning in social networks

Daniel Kious, Pierre Tarrès

Published 2016-01-04, Version 1

We propose a model of network formation based on reinforcement learning, which can be seen as a generalization of the one proposed by Skyrms for signaling games. On a discrete graph, whose vertices represent individuals, at each time step every vertex picks one of its neighbors with probability proportional to their past number of communications; independently, Nature chooses, i.i.d. in time, which vertices are allowed to communicate. A communication occurs when two neighbors mutually pick each other and both are allowed by Nature to communicate. Our results generalize those obtained by Hu, Skyrms and Tarrès. We prove that, up to an error term, the expected rate of communications increases on average, and thus converges a.s. If we define the limit graph as the non-oriented subgraph whose edges are the pairs of vertices communicating infinitely often, then, for stable configurations of the dynamics outside the boundary, the connected components of this limit graph are star-shaped. Conversely, any graph correspondence satisfying that property and a certain balance condition, and within which every vertex is connected to at least one other vertex, is a limit configuration with positive probability.
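The dynamics described above can be sketched in a short simulation. This is an illustrative toy, not the authors' construction: the Bernoulli(p) activation by Nature, the unit initial weights, and the unit reinforcement increment are all assumptions made for concreteness, since the abstract only specifies i.i.d. activation in time and proportional-to-past-communications picking.

```python
import random
from collections import defaultdict

def simulate(edges, steps=10_000, p=0.5, seed=0):
    """Toy reinforcement dynamics on a discrete graph.

    edges: undirected edges (u, v) of the graph.
    p: probability that Nature allows a vertex to communicate at a
       given step (Bernoulli(p) per vertex is an assumed instance of
       the abstract's 'i.i.d. in time' choice by Nature).
    Returns the undirected communication counts per edge.
    """
    rng = random.Random(seed)
    nbrs = defaultdict(list)
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    # weight[(u, v)] = 1 + past communications of u with v (assumed
    # unit initial weight and unit reinforcement increment)
    weight = defaultdict(lambda: 1.0)
    comms = defaultdict(int)
    for _ in range(steps):
        # Nature's i.i.d. activation of vertices
        allowed = {v for v in nbrs if rng.random() < p}
        # each vertex picks a neighbor proportionally to past weights
        pick = {u: rng.choices(nbrs[u],
                               weights=[weight[(u, v)] for v in nbrs[u]])[0]
                for u in nbrs}
        # communication: mutual pick, both allowed by Nature
        for u in nbrs:
            v = pick[u]
            if u < v and pick[v] == u and u in allowed and v in allowed:
                comms[(u, v)] += 1
                weight[(u, v)] += 1.0
                weight[(v, u)] += 1.0
    return dict(comms)
```

Running this on a small graph (e.g. a path 0-1-2) illustrates the reinforcement effect: communication counts concentrate on some edges, consistent with the star-shaped limit components described in the abstract.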

Related articles: Most relevant | Search more
arXiv:2102.11768 [math.PR] (Published 2021-02-23)
Robust Naive Learning in Social Networks
arXiv:math/0404106 [math.PR] (Published 2004-04-05)
Network formation by reinforcement learning: the long and medium run
arXiv:2010.04820 [math.PR] (Published 2020-10-09)
Finding geodesics on graphs using reinforcement learning