[BGMK25] Thomas Kleine Buening, Jiarui Gan, Debmalya Mandal, Marta Kwiatkowska. Strategyproof Reinforcement Learning from Human Feedback. Technical report arXiv:2503.09561v1, 2025. To appear. [pdf] [bib]
Downloads: pdf (369 KB), bib
Abstract. We study Reinforcement Learning from Human Feedback (RLHF), where multiple individuals with diverse preferences provide feedback strategically to sway the final policy in their favor. We show that existing RLHF methods are not strategyproof, which can result in learning a substantially misaligned policy even when only one out of k individuals reports their preferences strategically. We further find that any strategyproof RLHF algorithm must perform k times worse than the optimal policy, highlighting an inherent trade-off between incentive alignment and policy alignment. We then propose a pessimistic median algorithm that, under appropriate coverage assumptions, is approximately strategyproof and converges to the optimal policy as the number of individuals and samples increases.
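The abstract describes the pessimistic median algorithm only at a high level. The sketch below is a minimal, hypothetical illustration (not the paper's algorithm) of the underlying intuition: taking a per-entry median over individuals' reward estimates, with a coverage-based pessimism penalty, limits how much a single strategic reporter can move the aggregate. All names and parameters (`pessimistic_median_rewards`, `counts`, `beta`) are assumptions introduced here for illustration.

```python
import numpy as np

def pessimistic_median_rewards(reward_estimates, counts, beta=1.0):
    """Illustrative aggregation of per-individual reward estimates.

    reward_estimates: array of shape (k, n_state_actions), one row per individual.
    counts: array of shape (n_state_actions,), number of feedback samples
            covering each state-action pair (used for the pessimism penalty).
    beta: width of the pessimism penalty.

    Returns a single pessimistic reward vector: the per-entry median across
    individuals, minus an uncertainty penalty that shrinks with coverage.
    """
    median = np.median(reward_estimates, axis=0)     # robust to a single outlier reporter
    penalty = beta / np.sqrt(np.maximum(counts, 1))  # pessimism where coverage is poor
    return median - penalty

# Toy example: k = 5 individuals, 4 state-action pairs; one individual
# misreports to inflate the value of the last, poorly covered pair.
honest = np.array([[0.2, 0.5, 0.8, 0.1]] * 4)
strategic = np.array([[0.2, 0.5, 0.8, 10.0]])
estimates = np.vstack([honest, strategic])
counts = np.array([50, 50, 50, 5])

print(pessimistic_median_rewards(estimates, counts))
```

Because the median of k estimates barely changes when a single entry is replaced, one misreporting individual cannot shift the aggregated reward arbitrarily far, which mirrors the approximate strategyproofness property discussed in the abstract.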