arXiv:1809.07435 [cs.LG]

Predicting Periodicity with Temporal Difference Learning

Kristopher De Asis, Brendan Bennett, Richard S. Sutton

Published 2018-09-20 (Version 1)

Temporal difference (TD) learning is an important approach in reinforcement learning, as it combines ideas from dynamic programming and Monte Carlo methods in a way that allows for online and incremental model-free learning. A key idea of TD learning is that it learns predictive knowledge about the environment in the form of value functions, from which it can derive its behavior to address long-term sequential decision-making problems. The agent's horizon of interest, that is, how immediate or long-term a TD learning agent predicts into the future, is adjusted through a discount rate parameter. In this paper, we introduce an alternative view of the discount rate, drawing on ideas from digital signal processing, that allows for complex-valued discounting. Our results show that setting the discount rate to appropriately chosen complex numbers lets TD learning estimate the Discrete Fourier Transform (DFT) of a signal of interest online and incrementally. We thereby extend the types of knowledge representable by value functions, which we show are particularly useful for identifying periodic effects in the reward sequence.
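
To make the connection concrete, the following is a minimal sketch, not the paper's own experiment: ordinary TD(0) run with a complex-valued discount rate gamma = r * exp(-2j * pi * f) on a small deterministic ring of states emitting a periodic reward. The state count, reward signal, frequency f, magnitude r, and step size alpha are all illustrative assumptions; the point is that the learned complex value approximates an exponentially weighted Fourier coefficient of the reward sequence at frequency f.

import numpy as np

# Illustrative setup: a deterministic ring of states with a periodic reward.
n_states = 8
rewards = np.sin(2 * np.pi * np.arange(n_states) / n_states)

f = 1.0 / n_states                       # frequency of interest (cycles per step)
r = 0.95                                 # magnitude of the discount (|gamma| < 1)
gamma = r * np.exp(-2j * np.pi * f)      # complex-valued discount rate
alpha = 0.05                             # step size

V = np.zeros(n_states, dtype=complex)    # complex-valued value estimates

s = 0
for _ in range(100_000):
    s_next = (s + 1) % n_states          # deterministic ring transition
    reward = rewards[s_next]             # reward received on entering s_next
    # Standard TD(0) update, except that gamma (and hence V) is complex.
    td_error = reward + gamma * V[s_next] - V[s]
    V[s] = V[s] + alpha * td_error
    s = s_next

# Compare against the directly computed discounted Fourier sum from state 0.
direct = sum(gamma**t * rewards[(1 + t) % n_states] for t in range(2000))
print("TD estimate from state 0:", V[0])
print("Direct discounted sum:   ", direct)

In this sketch the magnitude and phase of V[0] reflect the amplitude and phase of the reward's periodic component at frequency f, which is the sense in which value functions with complex discounting can capture DFT-like quantities as the abstract describes.
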

Related articles:
arXiv:1907.05634 [cs.LG] (Published 2019-07-12)
Learning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling
arXiv:2306.09746 [cs.LG] (Published 2023-06-16)
Temporal Difference Learning with Experience Replay
arXiv:2203.04955 [cs.LG] (Published 2022-03-09)
Temporal Difference Learning for Model Predictive Control