arXiv Analytics

arXiv:2209.14089 [cond-mat.stat-mech]

Reinforcement Learning with Tensor Networks: Application to Dynamical Large Deviations

Edward Gillman, Dominic C. Rose, Juan P. Garrahan

Published 2022-09-28 (Version 1)

We present a framework to integrate tensor network (TN) methods with reinforcement learning (RL) for solving dynamical optimisation tasks. We consider the RL actor-critic method, a model-free approach for solving RL problems, and introduce TNs as the approximators for its policy and value functions. Our "actor-critic with tensor networks" (ACTeN) method is especially well suited to problems with large and factorisable state and action spaces. As an illustration of the applicability of ACTeN we solve the exponentially hard task of sampling rare trajectories in two paradigmatic stochastic models, the East model of glasses and the asymmetric simple exclusion process (ASEP), the latter being particularly challenging for other methods due to the absence of detailed balance. With substantial potential for further integration with the vast array of existing RL methods, the approach introduced here is promising both for applications in physics and for multi-agent RL problems more generally.
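To make the core idea concrete, below is a minimal sketch (not the authors' ACTeN implementation) of how a matrix-product-state tensor network can parameterise a factorised policy over per-site actions on a spin chain, sampled autoregressively, with a simple policy-gradient update against a scalar baseline standing in for the critic. All names, sizes, and the toy reward are illustrative assumptions; the finite-difference gradient is used purely for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 8, 4  # chain length and MPS bond dimension (illustrative values)

# One rank-3 tensor per lattice site: (left bond, action in {0,1}, right bond).
tensors = [rng.normal(scale=0.1, size=(D, 2, D)) for _ in range(N)]

def site_probs(left, A):
    """Conditional probabilities of the two actions at one site."""
    amps = np.einsum('i,iaj->aj', left, A)   # amplitude vectors for a = 0, 1
    p = np.sum(amps ** 2, axis=1)
    return p / p.sum(), amps

def sample(tensors):
    """Sample an action string site by site from the MPS policy."""
    left, actions = np.ones(D) / np.sqrt(D), []
    for A in tensors:
        p, amps = site_probs(left, A)
        a = rng.choice(2, p=p)
        actions.append(a)
        left = amps[a] / np.linalg.norm(amps[a])  # condition on chosen action
    return np.array(actions)

def log_prob(tensors, actions):
    """Log-probability of a fixed action string (deterministic in the tensors)."""
    left, logp = np.ones(D) / np.sqrt(D), 0.0
    for A, a in zip(tensors, actions):
        p, amps = site_probs(left, A)
        logp += np.log(p[a])
        left = amps[a] / np.linalg.norm(amps[a])
    return logp

def toy_return(actions):
    """Placeholder reward, standing in for the trajectory observable
    (e.g. activity or current) that defines the large-deviation bias."""
    return float(actions.sum())

# One illustrative actor update; the "critic" is reduced to a running-average
# baseline here purely to keep the sketch short.
baseline, lr, eps = 0.0, 1e-2, 1e-5
for step in range(50):
    a = sample(tensors)
    adv = toy_return(a) - baseline
    baseline += 0.1 * (toy_return(a) - baseline)
    # Finite-difference policy gradient w.r.t. the first tensor only; a real
    # implementation would differentiate all tensors (e.g. with autograd).
    grad = np.zeros_like(tensors[0])
    base_lp = log_prob(tensors, a)
    for idx in np.ndindex(tensors[0].shape):
        tensors[0][idx] += eps
        grad[idx] = (log_prob(tensors, a) - base_lp) / eps
        tensors[0][idx] -= eps
    tensors[0] += lr * adv * grad
```

The sketch only shows the structural point the abstract makes: the policy factorises over sites through the bond dimension of the tensor network, so sampling and log-probability evaluation scale linearly in system size rather than exponentially in the joint action space.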

Comments: Combined main text of 6 pages, 3 figures and supplemental materials of 7 pages, 1 figure
Related articles:
arXiv:cond-mat/0507525 (Published 2005-07-22, updated 2006-01-03)
An algorithm for counting circuits: application to real-world and random graphs
arXiv:cond-mat/0411450 (Published 2004-11-17)
New Application of Functional Integrals to Classical Mechanics
arXiv:0808.4160 [cond-mat.stat-mech] (Published 2008-08-29)
Using Relative Entropy to Find Optimal Approximations: an Application to Simple Fluids