Making efficient use of demonstrations to solve hard exploration problems
We propose R2D3, a novel imitation learning algorithm that can solve hard-exploration problems with sparse rewards.
This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard-exploration problems in partially observable environments with highly variable initial conditions. We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks where other state-of-the-art methods (both with and without demonstrations) fail to see even a single successful trajectory after tens of billions of steps of exploration.
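A central idea in R2D3's use of demonstrations is batch composition: each element of a training batch is drawn from a replay buffer of expert demonstrations with a small fixed probability (the "demo ratio"), and from the agent's own experience otherwise. Below is a minimal illustrative sketch of that sampling scheme; the buffer contents and the function name `sample_batch` are hypothetical, and the default demo ratio is only an assumed placeholder:

```python
import random

def sample_batch(demo_buffer, agent_buffer, batch_size, demo_ratio=1 / 256):
    """Draw a batch where each element comes from the demonstration buffer
    with probability demo_ratio, otherwise from the agent's replay buffer."""
    batch = []
    for _ in range(batch_size):
        source = demo_buffer if random.random() < demo_ratio else agent_buffer
        batch.append(random.choice(source))
    return batch

# Hypothetical transition buffers, for illustration only.
demo_buffer = [("demo", i) for i in range(10)]
agent_buffer = [("agent", i) for i in range(1000)]
batch = sample_batch(demo_buffer, agent_buffer, batch_size=64)
```

Because the demo ratio is small, most of each batch is the agent's own experience, but the rare demonstration samples are enough to guide exploration toward rewarding behavior.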
ArXiv Link: https://arxiv.org/abs/1909.01387