arXiv Analytics


arXiv:2406.08847 [cs.GT]

Roping in Uncertainty: Robustness and Regularization in Markov Games

Jeremy McMahan, Giovanni Artiglio, Qiaomin Xie

Published 2024-06-13Version 1

We study robust Markov games (RMGs) with $s$-rectangular uncertainty. We show a general equivalence between computing a robust Nash equilibrium (RNE) of an $s$-rectangular RMG and computing a Nash equilibrium (NE) of an appropriately constructed regularized MG. This equivalence yields a planning algorithm for solving $s$-rectangular RMGs, as well as provable robustness guarantees for policies computed by regularized methods. However, we show that computing an RNE is PPAD-hard even for two-player zero-sum matrix games with reward uncertainty alone. Consequently, we introduce a special uncertainty structure, called efficient player-decomposability, and show that an RNE of a two-player zero-sum RMG in this class can be computed in polynomial time. This class includes commonly used uncertainty sets such as $L_1$ and $L_\infty$ ball uncertainty sets.
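The polynomial-time result above ultimately rests on a classical fact: a Nash equilibrium of a (non-robust) two-player zero-sum matrix game is computable by linear programming. The sketch below, with names and structure of my own choosing rather than from the paper, shows that base computation: the row player's maximin strategy and the game value via an LP.

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_nash(R):
    """Maximin strategy and value for the row player of the zero-sum
    matrix game with payoff matrix R (row player maximizes x^T R y)."""
    m, n = R.shape
    # Variables: x_1..x_m (row mixed strategy) and v (game value).
    # linprog minimizes, so minimize -v to maximize v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every column j: v <= sum_i R[i, j] * x_i, i.e. -R^T x + v <= 0.
    A_ub = np.hstack([-R.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # The strategy is a probability distribution: sum_i x_i = 1.
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]  # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: value 0, uniform equilibrium strategy.
R = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, v = zero_sum_nash(R)
```

The paper's contribution is orthogonal to this LP: it identifies when the *robust* equilibrium problem (PPAD-hard in general, per the abstract) decomposes so that tractable per-player computations of this flavor suffice.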

Related articles
arXiv:2102.10156 [cs.GT] (Published 2021-02-19)
Learning to Persuade on the Fly: Robustness Against Ignorance
arXiv:2505.05211 [cs.GT] (Published 2025-05-08)
Incentive-Aware Machine Learning; Robustness, Fairness, Improvement & Causality
arXiv:2305.08125 [cs.GT] (Published 2023-05-14)
Robustness of Participatory Budgeting Outcomes: Complexity and Experiments