ArXiv Preprint
KL-regularized reinforcement learning from expert demonstrations has proved
successful in improving the sample efficiency of deep reinforcement learning
algorithms, allowing them to be applied to challenging physical real-world
tasks. However, we show that KL-regularized reinforcement learning with
behavioral reference policies derived from expert demonstrations can suffer
from pathological training dynamics that can lead to slow, unstable, and
suboptimal online learning. We show empirically that the pathology occurs for
commonly chosen behavioral policy classes and demonstrate its impact on sample
efficiency and online policy performance. Finally, we show that the pathology
can be remedied by non-parametric behavioral reference policies and that this
allows KL-regularized reinforcement learning to significantly outperform
state-of-the-art approaches on a variety of challenging locomotion and
dexterous hand manipulation tasks.
Tim G. J. Rudner, Cong Lu, Michael A. Osborne, Yarin Gal, Yee Whye Teh
2022-12-28
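
For context, KL-regularized reinforcement learning of the kind described in the abstract typically augments the environment reward with a penalty on the divergence between the online policy and the behavioral reference policy. The following is a minimal sketch of that standard objective, assuming a discount factor \gamma, a temperature \alpha, and a behavioral reference policy \pi_0 derived from expert demonstrations; the notation is illustrative and not taken from the paper itself.

% Illustrative KL-regularized RL objective (assumed standard form, not quoted from the paper).
% The agent maximizes reward while being penalized for deviating from the reference policy \pi_0.
\begin{equation}
  J(\pi)
  = \mathbb{E}_{\pi}\!\left[
      \sum_{t=0}^{\infty} \gamma^{t}
      \Bigl(
        r(s_t, a_t)
        - \alpha\, D_{\mathrm{KL}}\!\bigl(\pi(\cdot \mid s_t)\,\|\,\pi_0(\cdot \mid s_t)\bigr)
      \Bigr)
    \right]
\end{equation}

Under this objective, the choice of \pi_0 matters: the KL term is only well behaved where the reference policy assigns sufficient probability mass, which is why the class of behavioral reference policies (parametric vs. non-parametric) can affect the stability of online learning.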