arXiv Analytics

arXiv:2406.14951 [cs.LG]

An Idiosyncrasy of Time-discretization in Reinforcement Learning

Kris De Asis, Richard S. Sutton

Published 2024-06-21 (Version 1)

Many reinforcement learning algorithms are built on the assumption that an agent interacts with an environment over fixed-duration, discrete time steps. However, physical systems are continuous in time, requiring a choice of time-discretization granularity when controlling them digitally. Furthermore, such systems do not wait for decisions to be made before advancing the environment state, necessitating a study of how the choice of discretization affects a reinforcement learning algorithm. In this work, we consider the relationship between the definitions of the continuous-time and discrete-time returns. Specifically, we identify an idiosyncrasy that arises when naively applying a discrete-time algorithm to a discretized continuous-time environment, and note how a simple modification can better align the return definitions. This observation is of practical relevance when dealing with environments where the time-discretization granularity is a choice, or in situations where such granularity is inherently stochastic.
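
The sketch below illustrates the general discretization issue the abstract alludes to, not the paper's specific modification: a discrete-time return with a fixed per-step discount changes meaning as the step duration changes, whereas scaling the per-step discount to gamma**dt and the reward by dt makes the discrete sum a Riemann approximation of the continuous-time return integral_0^T gamma^t r(t) dt. The reward-rate function, discount, and horizon used here are illustrative assumptions.

import numpy as np

def continuous_return(reward_fn, gamma, horizon, n=1_000_000):
    # High-resolution Riemann approximation of the continuous-time return.
    t = np.linspace(0.0, horizon, n, endpoint=False)
    dt = horizon / n
    return np.sum(gamma ** t * reward_fn(t) * dt)

def naive_discrete_return(reward_fn, gamma, horizon, dt):
    # Fixed per-step discount gamma; reward sampled once per step.
    t = np.arange(0.0, horizon, dt)
    return np.sum(gamma ** np.arange(len(t)) * reward_fn(t))

def aligned_discrete_return(reward_fn, gamma, horizon, dt):
    # Per-step discount gamma**dt and reward scaled by the step duration dt,
    # so the sum approximates the continuous-time discounted integral.
    t = np.arange(0.0, horizon, dt)
    return np.sum((gamma ** dt) ** np.arange(len(t)) * reward_fn(t) * dt)

if __name__ == "__main__":
    reward_fn = lambda t: np.cos(t)   # an arbitrary reward-rate signal (assumption)
    gamma, horizon = 0.9, 10.0
    target = continuous_return(reward_fn, gamma, horizon)
    for dt in (1.0, 0.1, 0.01):
        naive = naive_discrete_return(reward_fn, gamma, horizon, dt)
        aligned = aligned_discrete_return(reward_fn, gamma, horizon, dt)
        print(f"dt={dt:5.2f}  naive={naive:8.3f}  aligned={aligned:8.3f}  continuous={target:8.3f}")

Running this, the naive return varies substantially with dt, while the aligned return converges to the continuous-time value as dt shrinks, which is the sense in which a suitable rescaling "better aligns the return definitions."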
