Bias-reduced multi-step hindsight experience replay

02/25/2021
by Rui Yang, et al.

Multi-goal reinforcement learning is widely used in planning and robot manipulation. Two main challenges in multi-goal reinforcement learning are sparse rewards and sample inefficiency. Hindsight Experience Replay (HER) aims to tackle both challenges with hindsight knowledge. However, HER and its previous variants still require millions of samples and substantial computation. In this paper, we propose Multi-step Hindsight Experience Replay (MHER) based on n-step relabeling, which incorporates multi-step relabeled returns to improve sample efficiency. Despite the advantages of n-step relabeling, we show theoretically and experimentally that the off-policy n-step bias it introduces can lead to poor performance in many environments. To address this issue, we present two bias-reduced MHER algorithms, MHER(λ) and Model-based MHER (MMHER). MHER(λ) exploits the λ-return, while MMHER benefits from model-based value expansion. Experimental results on numerous multi-goal robotic tasks show that our solutions can successfully alleviate off-policy n-step bias and achieve significantly higher sample efficiency than HER and Curriculum-guided HER, with little additional computation beyond HER.
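To make the core idea concrete, the sketch below illustrates one way an n-step relabeled target and a λ-weighted combination of such targets could be computed. This is a minimal illustration under stated assumptions, not the paper's implementation: the sparse reward form, the function names (sparse_reward, n_step_relabeled_target, lambda_target), the transition layout, and the hyperparameter values (gamma, lam, n_max) are all hypothetical, and the truncated, normalized λ-weighting is one common variant that may differ from the paper's exact MHER(λ) formulation.

```python
import numpy as np

def sparse_reward(achieved_goal, goal, threshold=0.05):
    # Assumed sparse multi-goal reward: 0 on success, -1 otherwise
    # (a common convention in robotic benchmarks; the exact form here
    # is an assumption, not taken from the paper).
    return 0.0 if np.linalg.norm(achieved_goal - goal) < threshold else -1.0

def n_step_relabeled_target(transitions, t, n, goal, q_fn, gamma=0.98):
    """n-step target for transition t after relabeling with `goal`:
    R_t = sum_{i=0}^{n-1} gamma^i * r(ag_{t+i+1}, goal) + gamma^n * Q(s_{t+n}, goal).
    `transitions` is a list of dicts with keys 's', 'ag_next', 's_next';
    `q_fn(state, goal)` is a (hypothetical) target Q-value estimator."""
    ret = 0.0
    horizon = min(n, len(transitions) - t)  # truncate near episode end
    for i in range(horizon):
        # Rewards are recomputed against the relabeled goal.
        r = sparse_reward(transitions[t + i]["ag_next"], goal)
        ret += (gamma ** i) * r
    # Bootstrap from the state reached after `horizon` steps.
    ret += (gamma ** horizon) * q_fn(transitions[t + horizon - 1]["s_next"], goal)
    return ret

def lambda_target(transitions, t, goal, q_fn, n_max=5, lam=0.7, gamma=0.98):
    # MHER(λ)-style target: an exponentially weighted average of the
    # 1..n_max step targets, which damps the off-policy bias of large n.
    targets = [n_step_relabeled_target(transitions, t, n, goal, q_fn, gamma)
               for n in range(1, n_max + 1)]
    weights = np.array([lam ** (n - 1) for n in range(1, n_max + 1)])
    weights /= weights.sum()  # normalized truncated weighting (an assumption)
    return float(np.dot(weights, targets))

# Example usage with a placeholder critic and a two-transition episode:
if __name__ == "__main__":
    episode = [
        {"s": np.zeros(4), "ag_next": np.array([0.1, 0.0]), "s_next": np.ones(4)},
        {"s": np.ones(4),  "ag_next": np.array([0.2, 0.0]), "s_next": 2 * np.ones(4)},
    ]
    relabeled_goal = episode[-1]["ag_next"]   # hindsight: final achieved goal
    q_fn = lambda s, g: 0.0                   # dummy Q-function for illustration
    print(lambda_target(episode, t=0, goal=relabeled_goal, q_fn=q_fn, n_max=2))
```

In this sketch, shorter-horizon targets receive the largest weights, so the λ-return trades off the faster credit propagation of multi-step returns against the off-policy bias that grows with n; MMHER instead obtains multi-step targets by rolling a learned dynamics model forward, which is not shown here.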
