arXiv Analytics

arXiv:2308.11546 [math.OC]

Risk-Minimizing Two-Player Zero-Sum Stochastic Differential Game via Path Integral Control

Apurva Patil, Yujing Zhou, David Fridovich-Keil, Takashi Tanaka

Published 2023-08-22, Version 1

This paper addresses a continuous-time, risk-minimizing two-player zero-sum stochastic differential game (SDG), in which each player aims to minimize its probability of failure. Failure occurs when the state of the game enters predefined undesirable domains, and one player's failure is the other's success. We derive a sufficient condition for this game to admit a saddle-point equilibrium and show that it can be solved via a Hamilton-Jacobi-Isaacs (HJI) partial differential equation (PDE) with a Dirichlet boundary condition. Under certain assumptions on the system dynamics and cost function, we establish the existence and uniqueness of the saddle point of the game. We provide explicit expressions for the saddle-point policies, which can be evaluated numerically using path integral control. This allows the game to be solved online via Monte Carlo sampling of system trajectories. We implement our control synthesis framework on two classes of risk-minimizing zero-sum SDGs: a disturbance attenuation problem and a pursuit-evasion game. Simulation studies are presented to validate the proposed control synthesis framework.
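The key computational idea — evaluating a control policy online via Monte Carlo sampling of system trajectories — can be illustrated with a minimal path-integral-control sketch. This is not the paper's two-player saddle-point scheme; it is a single-player, one-dimensional toy problem in which the cost of a rollout is simply whether it enters a failure region. The dynamics, the failure threshold `x_fail`, and all parameter values below are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def path_integral_control(x0, N=1000, T=50, dt=0.02, lam=1.0,
                          sigma=1.0, x_fail=1.0):
    """Estimate a control at state x0 by sampling N noisy, uncontrolled
    rollouts and exponentially weighting each by its accumulated cost.

    Illustrative toy model (not the paper's SDG): a 1-D double integrator
    driven by white noise, with failure defined as the position reaching
    x_fail. The rollout cost is the failure indicator, so low-cost
    (surviving) rollouts dominate the weighted average.
    """
    eps0 = np.empty(N)   # first-step noise sample of each rollout
    S = np.empty(N)      # accumulated cost of each rollout
    for i in range(N):
        x, v = x0, 0.0
        cost = 0.0
        for k in range(T):
            eps = rng.normal()
            if k == 0:
                eps0[i] = eps
            v += sigma * np.sqrt(dt) * eps   # uncontrolled noisy dynamics
            x += v * dt
            if x >= x_fail:                  # entered the undesirable set
                cost = 1.0                   # probability-of-failure cost
                break
        S[i] = cost
    # Path-integral weights: surviving rollouts get weight 1, failed ones
    # get exp(-1/lam); normalize to a probability distribution.
    w = np.exp(-S / lam)
    w /= w.sum()
    # Control estimate: weighted average of the injected first-step noise,
    # rescaled to the control channel.
    return sigma / np.sqrt(dt) * np.dot(w, eps0)

u = path_integral_control(x0=0.5)
```

Because failed rollouts are down-weighted, the estimate is biased toward noise realizations that steered away from the failure set; in the paper, an analogous sampling-based evaluation is carried out for both players' saddle-point policies.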

Related articles:
arXiv:1406.4026 [math.OC] (Published 2014-06-16, updated 2015-02-11)
Path Integral Control and State Dependent Feedback
arXiv:2007.03960 [math.OC] (Published 2020-07-08)
On Entropic Optimization and Path Integral Control
arXiv:1309.3615 [math.OC] (Published 2013-09-14, updated 2014-09-17)
Implicit sampling for path integral control, Monte Carlo localization, and SLAM