arXiv Analytics

arXiv:2405.17085 [math.OC]

Inverse reinforcement learning by expert imitation for the stochastic linear-quadratic optimal control problem

Zhongshi Sun, Guangyan Jia

Published 2024-05-27, Version 1

This article studies inverse reinforcement learning (IRL) for the stochastic linear-quadratic optimal control problem involving two agents. A learner agent does not know the expert agent's performance cost function; instead, it imitates the expert's behavior by constructing an underlying cost function that yields the same optimal feedback control as the expert's. We first develop a model-based IRL algorithm consisting of a policy correction and a policy update, drawn from policy iteration in reinforcement learning, together with a cost-function weight reconstruction based on inverse optimal control. Building on this scheme, we then propose a model-free off-policy IRL algorithm that requires neither knowledge nor identification of the system and collects the behavior data of the expert and learner agents only once during the iteration process. Moreover, we prove the algorithm's convergence and stability and characterize the non-uniqueness of its solutions. Finally, a simulation example verifies the effectiveness of the proposed algorithm.
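To make the weight-reconstruction idea concrete, the following is a minimal sketch, not the paper's algorithm, of inverse optimal control for a deterministic discrete-time LQR (a simplification of the stochastic setting). All system matrices and weights here are illustrative assumptions. Given only the expert's feedback gain, it finds a symmetric cost matrix P satisfying the LQR stationarity condition and recovers a state weight Q from the Bellman (Lyapunov) equation; the recovered Q need not equal the expert's true weight, illustrating the non-uniqueness of solutions mentioned above.

```python
import numpy as np

# Hypothetical 2-state, 1-input discrete-time system (illustrative values).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
R = np.array([[1.0]])          # input weight, assumed known/normalized
Q_true = np.diag([2.0, 1.0])   # expert's (hidden) state weight

def dare_by_iteration(A, B, Q, R, iters=500):
    """Solve the discrete algebraic Riccati equation by value iteration."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return P

# Expert's optimal gain K_e -- the only quantity the learner observes.
P_true = dare_by_iteration(A, B, Q_true, R)
K_e = np.linalg.solve(R + B.T @ P_true @ B, B.T @ P_true @ A)

# Inverse step: find symmetric P with  B' P (A - B K_e) = R K_e
# (LQR stationarity), then recover Q from the Bellman equation
#   Q = P - K_e' R K_e - (A - B K_e)' P (A - B K_e).
n = A.shape[0]
Acl = A - B @ K_e
idx = [(i, j) for i in range(n) for j in range(i, n)]  # upper-triangular basis
rows = []
for r in range(B.shape[1]):
    for c in range(n):
        row = []
        for (i, j) in idx:
            E = np.zeros((n, n))
            E[i, j] = 1.0
            E[j, i] = 1.0
            row.append((B.T @ E @ Acl)[r, c])
        rows.append(row)
M = np.array(rows)
b = (R @ K_e).ravel()
# Underdetermined linear system: lstsq returns one of infinitely many
# solutions, so the reconstructed weight is generally not Q_true.
theta, *_ = np.linalg.lstsq(M, b, rcond=None)
P_hat = np.zeros((n, n))
for t, (i, j) in zip(theta, idx):
    P_hat[i, j] = t
    P_hat[j, i] = t
Q_hat = P_hat - K_e.T @ R @ K_e - Acl.T @ P_hat @ Acl
```

By construction, the policy-update step applied to `P_hat` reproduces the expert's gain exactly, since the stationarity condition rearranges to `K_e = (R + B'P_hat B)^{-1} B'P_hat A`, even though `Q_hat` may differ from the expert's true weight.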

Related articles:
arXiv:2405.15509 [math.OC] (Published 2024-05-24)
Randomized algorithms and PAC bounds for inverse reinforcement learning in continuous spaces
arXiv:1706.04316 [math.OC] (Published 2017-06-14)
A Class of Discrete-time Mean-field Stochastic Linear-quadratic Optimal Control Problems with Financial Application
arXiv:1806.05215 [math.OC] (Published 2018-06-13)
Weak Closed-Loop Solvability of Stochastic Linear-Quadratic Optimal Control Problems