arXiv:2211.14972 [math.OC]

On Separation Between Learning and Control in Partially Observed Markov Decision Processes

Andreas A. Malikopoulos

Published: 2022-11-28 (Version 1)

Cyber-physical systems (CPS) encounter a large volume of data that arrives gradually in real time rather than all at once in advance. As the volume of data grows, so does the domain of the control strategies, making the search for an optimal strategy challenging. Even when an optimal control strategy is found, implementing strategies with ever-growing domains is burdensome. To derive an optimal control strategy for a CPS, we typically assume an ideal model of the system. Such model-based control approaches cannot deliver optimal solutions with performance guarantees due to the discrepancy between the model and the actual CPS. Conversely, traditional supervised learning approaches cannot always yield robust solutions using data collected offline, and applying reinforcement learning directly to the actual CPS can raise significant concerns for the safety and robust operation of the system. The goal of this chapter is to provide a theoretical framework that separates the learning and control tasks, which allows us to combine offline model-based control with online learning approaches and thus circumvent the challenges in deriving optimal control strategies for CPS.
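
The sketch below illustrates the separation idea described in the abstract: a control strategy is derived offline on a nominal model, while the discrepancy between that model and the true system is learned online from operating data and periodically handed back to the offline control task. For brevity it uses a small fully observed MDP rather than a POMDP, and the toy model, the re-solve schedule, and all names are illustrative assumptions, not the chapter's actual framework.

```python
# Minimal sketch: offline model-based control + online model learning.
# Toy fully observed MDP; all quantities here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 3, 2, 0.9          # states, actions, discount factor

# Nominal (assumed) model used for the offline control task.
P_nominal = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s']
R = rng.uniform(size=(S, A))                         # reward r(s, a)

# True dynamics, unknown to the designer: the model discrepancy.
P_true = rng.dirichlet(np.ones(S), size=(S, A))

def value_iteration(P, R, tol=1e-8):
    """Offline control task: optimal policy for a given model."""
    V = np.zeros(S)
    while True:
        Q = R + gamma * (P @ V)       # Q[s, a]; P @ V contracts over s'
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return Q.argmax(axis=1)   # greedy (deterministic) policy
        V = V_new

policy = value_iteration(P_nominal, R)   # strategy from the nominal model

# Online learning task: estimate the transition model from operating data,
# then periodically re-derive the strategy on the learned model.
counts = np.ones((S, A, S))              # Dirichlet(1) pseudo-counts
s = 0
for t in range(5000):
    a = policy[s]
    s_next = rng.choice(S, p=P_true[s, a])
    counts[s, a, s_next] += 1
    if (t + 1) % 500 == 0:
        P_hat = counts / counts.sum(axis=2, keepdims=True)
        policy = value_iteration(P_hat, R)
    s = s_next

print("policy on learned model:", policy)
```

The point of the separation is that the expensive optimization remains an offline, model-based computation with its usual guarantees, while only the model estimate is updated online from data, so no learning ever acts directly and unchecked on the physical system.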

Comments: 18 pages, 5 figures. arXiv admin note: text overlap with arXiv:2101.10992
Categories: math.OC
Related articles:
arXiv:2012.09417 [math.OC] (Published 2020-12-17)
A Note on Optimization Formulations of Markov Decision Processes
arXiv:2107.06379 [math.OC] (Published 2021-07-13)
Separation of Learning and Control for Cyber-Physical Systems
arXiv:2207.00885 [math.OC] (Published 2022-07-02)
Reinforcement Learning Approaches for the Orienteering Problem with Stochastic and Dynamic Release Dates