Versatile Inverse Reinforcement Learning via Cumulative Rewards

Niklas Freymuth, Philipp Becker, Gerhard Neumann

Published in Workshop on Robot Learning: Self-Supervised and Lifelong Learning @ NeurIPS, 2021

Abstract:

Inverse Reinforcement Learning infers a reward function from expert demonstrations, aiming to encode the behavior and intentions of the expert. Current approaches usually do this with generative and uni-modal models, meaning that they encode a single behavior. In the common setting where there are multiple solutions to a problem and the experts show versatile behavior, this severely limits the generalization capabilities of these methods. We propose a novel method for Inverse Reinforcement Learning that overcomes these problems by formulating the recovered reward as a sum of iteratively trained discriminators. We show on simulated tasks that our approach recovers general, high-quality reward functions and produces policies of the same quality as behavioral cloning approaches designed for versatile behavior.
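The core formulation, a reward built as a sum of iteratively trained discriminators, can be illustrated with a minimal sketch of a GAN-style IRL loop. This is an assumption-laden illustration, not the paper's implementation: the class and function names (`CumulativeReward`, `make_discriminator`), the network architecture, and the use of raw discriminator logits as the reward are all hypothetical choices for exposition.

```python
import torch
import torch.nn as nn


def make_discriminator(obs_dim: int, act_dim: int) -> nn.Module:
    # Hypothetical discriminator: maps a (state, action) pair to a logit.
    return nn.Sequential(
        nn.Linear(obs_dim + act_dim, 64),
        nn.ReLU(),
        nn.Linear(64, 1),
    )


class CumulativeReward:
    """Sketch of a reward defined as a sum of iteratively trained discriminators.

    After each training iteration, the finished discriminator is frozen and
    appended; the recovered reward is the sum of the logits of all
    discriminators trained so far.
    """

    def __init__(self) -> None:
        self.discriminators: list[nn.Module] = []

    def add(self, disc: nn.Module) -> None:
        # Freeze the discriminator's parameters before adding it to the sum,
        # so later policy updates cannot change earlier reward terms.
        for p in disc.parameters():
            p.requires_grad_(False)
        self.discriminators.append(disc.eval())

    def reward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        x = torch.cat([obs, act], dim=-1)
        # Cumulative reward: sum of logits over all past iterations.
        return sum(d(x) for d in self.discriminators).squeeze(-1)
```

A plausible outer loop, again only a sketch: each iteration trains a fresh discriminator to separate expert (state, action) pairs from the current policy's pairs, adds it via `reward_fn.add(disc)`, and then updates the policy with any RL algorithm against `reward_fn.reward`. Freezing past discriminators is the design choice that lets the reward accumulate rather than being overwritten each iteration.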

Paper