arXiv Analytics

arXiv:2406.05086 [math.OC]

Robust Reward Design for Markov Decision Processes

Shuo Wu, Haoxiang Ma, Jie Fu, Shuo Han

Published 2024-06-07 (Version 1)

The problem of reward design examines the interaction between a leader and a follower, where the leader aims to shape the follower's behavior to maximize the leader's payoff by modifying the follower's reward function. Current approaches to reward design rely on an accurate model of how the follower responds to reward modifications, and they can be sensitive to modeling inaccuracies. To address this sensitivity, we present a solution that is robust against uncertainties in modeling the follower, including 1) how the follower breaks ties among nonunique best responses, 2) inexact knowledge of how the follower perceives reward modifications, and 3) bounded rationality of the follower. Our robust solution is guaranteed to exist under mild conditions and can be computed numerically by solving a mixed-integer linear program. Numerical experiments on multiple test cases demonstrate that our solution improves robustness over the standard approach without incurring significant additional computational cost.
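To make the leader-follower structure of reward design concrete, here is a minimal sketch in a tiny made-up deterministic MDP. It is not the paper's robust MILP formulation: the leader simply grid-searches over bounded reward bonuses, and the follower's best response is computed by value iteration. All states, rewards, and bonus grids are illustrative assumptions.

```python
# Illustrative sketch of reward design (NOT the paper's MILP): a leader adds
# bounded bonuses to a follower's reward so that the follower's best-response
# policy maximizes the leader's own payoff. Toy 2-state deterministic MDP.
import itertools

GAMMA = 0.9
STATES = [0, 1]
ACTIONS = [0, 1]

# P[s][a] = successor state (deterministic transitions for simplicity)
P = {0: {0: 0, 1: 1}, 1: {0: 0, 1: 1}}

# Follower's base reward and the leader's own payoff, indexed r[s][a]
r_follower = {0: {0: 1.0, 1: 0.0}, 1: {0: 0.0, 1: 0.5}}
r_leader   = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 1.0}}

def best_response(r, iters=200):
    """Follower's optimal deterministic policy via value iteration on r."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {s: max(r[s][a] + GAMMA * V[P[s][a]] for a in ACTIONS)
             for s in STATES}
    return {s: max(ACTIONS, key=lambda a: r[s][a] + GAMMA * V[P[s][a]])
            for s in STATES}

def leader_value(policy, start=0, iters=200):
    """Leader's discounted payoff when the follower runs `policy`."""
    total, s = 0.0, start
    for t in range(iters):
        a = policy[s]
        total += (GAMMA ** t) * r_leader[s][a]
        s = P[s][a]
    return total

# Leader enumerates bounded bonuses in {0, 0.5, 1.0, 1.5} for each (s, a)
# and keeps the modification whose induced best response pays the most.
best = (-1.0, None)
for bonus in itertools.product([0.0, 0.5, 1.0, 1.5], repeat=4):
    mod = {s: {a: r_follower[s][a] + bonus[2 * s + a] for a in ACTIONS}
           for s in STATES}
    val = leader_value(best_response(mod))
    if val > best[0]:
        best = (val, bonus)

print("leader payoff:", round(best[0], 3), "bonuses:", best[1])
```

Note that this sketch exposes exactly the fragility the abstract targets: `best_response` breaks ties deterministically (first maximizing action wins), so a follower with a different tie-breaking rule, a misperceived bonus, or bounded rationality could behave differently under the same modification. The paper's robust solution hedges against these uncertainties instead of assuming them away.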

Related articles:
arXiv:1202.6259 [math.OC] (Published 2012-02-28)
A distance for probability spaces, and long-term values in Markov Decision Processes and Repeated Games
arXiv:2412.12879 [math.OC] (Published 2024-12-17)
Robust Deterministic Policies for Markov Decision Processes under Budgeted Uncertainty
arXiv:1310.7906 [math.OC] (Published 2013-10-29, updated 2015-08-04)
Convergence Analysis of the Approximate Newton Method for Markov Decision Processes