Agent-Level Maximum Entropy Inverse Reinforcement Learning for Mean Field Games

Abstract

Mean field games (MFG) make otherwise intractable reinforcement learning (RL) feasible in large-scale multi-agent systems by reducing the interactions among agents to those between an individual agent and the average effect of the population. However, RL agents are notoriously prone to unexpected behaviours due to reward mis-specification. While inverse RL (IRL) holds promise for automatically acquiring suitable reward functions from demonstrations, extending IRL to MFG is challenging due to the complex notion of mean-field-type equilibria and the coupling between agent-level and population-level dynamics. To address these challenges, we propose a novel IRL framework for MFG, called Mean Field IRL (MFIRL), built upon a new equilibrium concept that incorporates causal entropy regularisation. Crucially, MFIRL is the first method to achieve unbiased inference of agent-level (ground-truth) reward signals in MFG. Experiments show that MFIRL outperforms the state-of-the-art method in sample efficiency, reward recovery, and re-optimisation under varying environment dynamics.
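For intuition, an entropy-regularised agent objective in a mean-field setting can be sketched roughly as below. This is a generic illustration under assumed notation (parametrised reward r_\theta depending on state, action and population distribution \mu_t; policy \pi; entropy term \mathcal{H}), not the exact formulation used in the paper.

% Illustrative sketch only; symbols are assumptions, not the paper's notation.
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{T-1} r_\theta(s_t, a_t, \mu_t) + \mathcal{H}\big(\pi(\cdot \mid s_t, \mu_t)\big) \right]

Here \mu_t denotes the population's state distribution at time t, so each agent responds only to this aggregate mean-field term rather than to every other agent individually, which is what keeps the problem tractable at scale.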

Publication
arXiv 2021