arXiv:2301.11270 [cs.LG]

Principled Reinforcement Learning with Human Feedback from Pairwise or $K$-wise Comparisons

Banghua Zhu, Jiantao Jiao, Michael I. Jordan

Published 2023-01-26, Version 1

We provide a theoretical framework for Reinforcement Learning with Human Feedback (RLHF). Our analysis shows that when the true reward function is linear, the widely used maximum likelihood estimator (MLE) converges under both the Bradley-Terry-Luce (BTL) model and the Plackett-Luce (PL) model. However, we show that when training a policy based on the learned reward model, MLE fails while a pessimistic MLE provides policies with improved performance under certain coverage assumptions. Additionally, we demonstrate that under the PL model, the true MLE and an alternative MLE that splits the $K$-wise comparison into pairwise comparisons both converge. Moreover, the true MLE is asymptotically more efficient. Our results validate the empirical success of existing RLHF algorithms in InstructGPT and provide new insights for algorithm design. Furthermore, our results unify the problem of RLHF and Max Entropy Inverse Reinforcement Learning, and provide the first sample complexity bound for both problems.
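The estimator at the center of the abstract, the MLE for a linear reward learned from pairwise comparisons under the BTL model, can be illustrated with a minimal sketch. This is not the authors' implementation: the feature representation, the synthetic comparison data, and the use of `scipy.optimize.minimize` are assumptions made here purely for illustration.

```python
# Minimal sketch (not the paper's code): MLE of a linear reward r(x) = theta^T x
# from pairwise comparisons under the Bradley-Terry-Luce model, where
# P(i beats j) = sigmoid(theta^T (x_i - x_j)).
import numpy as np
from scipy.optimize import minimize


def btl_neg_log_likelihood(theta, diffs):
    # diffs[k] = feature(winner_k) - feature(loser_k);
    # -log P(winner_k beats loser_k) = log(1 + exp(-theta^T diffs[k])).
    z = diffs @ theta
    return np.sum(np.logaddexp(0.0, -z))


def fit_btl_mle(diffs, dim):
    # The negative log-likelihood is convex in theta, so a quasi-Newton
    # method from a zero initialization finds the MLE.
    res = minimize(btl_neg_log_likelihood, np.zeros(dim), args=(diffs,),
                   method="L-BFGS-B")
    return res.x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 5, 2000
    theta_true = rng.normal(size=d)                      # hypothetical true reward parameter
    x_a = rng.normal(size=(n, d))                        # features of option A in each comparison
    x_b = rng.normal(size=(n, d))                        # features of option B
    # Sample which option wins according to the BTL model with the true reward.
    p_a_wins = 1.0 / (1.0 + np.exp(-(x_a - x_b) @ theta_true))
    a_wins = rng.random(n) < p_a_wins
    diffs = np.where(a_wins[:, None], x_a - x_b, x_b - x_a)
    theta_hat = fit_btl_mle(diffs, d)
    cos = theta_hat @ theta_true / (np.linalg.norm(theta_hat) * np.linalg.norm(theta_true))
    print("cosine similarity between estimated and true reward parameter:", cos)
```

Under the PL model the same idea extends to $K$-wise comparisons, either by maximizing the full Plackett-Luce likelihood or by splitting each ranking into pairwise comparisons as the abstract describes; the sketch above corresponds to the pairwise (BTL) case only.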

Related articles: Most relevant | Search more
arXiv:2305.18438 [cs.LG] (Published 2023-05-29)
Reinforcement Learning with Human Feedback: Learning Dynamic Choices via Pessimism
arXiv:2402.17747 [cs.LG] (Published 2024-02-27, updated 2024-06-08)
When Your AIs Deceive You: Challenges of Partial Observability in Reinforcement Learning from Human Feedback
arXiv:2312.11456 [cs.LG] (Published 2023-12-18, updated 2024-01-28)
Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint
Wei Xiong et al.