arXiv:1906.11392 [math.OC]

From self-tuning regulators to reinforcement learning and back again

Nikolai Matni, Alexandre Proutiere, Anders Rantzer, Stephen Tu

Published 2019-06-27, Version 1

Machine and reinforcement learning (RL) are being applied to plan and control the behavior of autonomous systems interacting with the physical world -- examples include self-driving vehicles, distributed sensor networks, and agile robots. However, if machine learning is to be applied in these new settings, the resulting algorithms must come with the reliability, robustness, and safety guarantees that are hallmarks of the control theory literature, as failures could be catastrophic. Thus, as RL algorithms are increasingly and more aggressively deployed in safety-critical settings, it is imperative that control theorists be part of the conversation. The goal of this tutorial paper is to provide a jumping-off point for control theorists wishing to work on RL-related problems by covering recent advances in bridging learning and control theory, and by placing these results within the appropriate historical context of the system identification and adaptive control literatures.

Comments: Tutorial paper submitted to 2020 IEEE Conference on Decision and Control
Categories: math.OC, cs.LG, stat.ML
Related articles:
arXiv:1906.11395 [math.OC] (Published 2019-06-27)
A Tutorial on Concentration Bounds for System Identification
arXiv:2003.02894 [math.OC] (Published 2020-03-05)
Distributional Robustness and Regularization in Reinforcement Learning
arXiv:2402.08306 [math.OC] (Published 2024-02-13, updated 2024-07-29)
Reinforcement Learning for Docking Maneuvers with Prescribed Performance