Adaptive Variance for Changing Sparse-Reward Environments

03/15/2019
by Xingyu Lin, et al.

Robots trained to perform a task in a fixed environment often fail when the environment changes unexpectedly, due to a lack of exploration. We propose a principled way to adapt the policy for better exploration in changing sparse-reward environments. Unlike previous works that explicitly model environmental changes, we analyze the relationship between the value function and the optimal exploration for a Gaussian-parameterized policy, and show that our theory leads to an effective strategy for adjusting the variance of the policy, enabling fast adaptation to changes in a variety of sparse-reward environments.
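The abstract's core idea can be illustrated with a small sketch: a Gaussian policy whose exploration noise (standard deviation) widens when the value estimate drops, signaling that the environment may have changed, and shrinks again as performance recovers. The class name, update rule, and multiplicative growth/decay factors below are illustrative assumptions, not the paper's actual method, which derives the variance adjustment from the value function analytically.

```python
import numpy as np


class AdaptiveVarianceGaussianPolicy:
    """Hypothetical sketch: a Gaussian policy that adapts its variance
    based on the observed value estimate. A drop in value relative to a
    running baseline triggers more exploration (larger sigma); recovery
    triggers less (smaller sigma). The specific rule is an assumption."""

    def __init__(self, mean, sigma=0.1, sigma_max=1.0, growth=1.5, decay=0.9):
        self.mean = np.asarray(mean, dtype=float)
        self.sigma = sigma          # current exploration std-dev
        self.sigma_max = sigma_max  # cap on exploration noise
        self.growth = growth        # factor to widen sigma on a value drop
        self.decay = decay          # factor to shrink sigma on recovery
        self.value_baseline = None  # running reference value

    def act(self, rng):
        # Sample an action from N(mean, sigma^2 I).
        return rng.normal(self.mean, self.sigma)

    def update_variance(self, value_estimate):
        # First observation just sets the baseline.
        if self.value_baseline is None:
            self.value_baseline = value_estimate
            return
        if value_estimate < self.value_baseline:
            # Value dropped: likely an environment change, so explore more.
            self.sigma = min(self.sigma * self.growth, self.sigma_max)
        else:
            # Value recovered: exploit more, and raise the baseline.
            self.sigma = max(self.sigma * self.decay, 1e-3)
            self.value_baseline = value_estimate
```

With `sigma=0.1`, a value drop multiplies sigma by 1.5 (to 0.15), and a subsequent recovery multiplies it by 0.9 (to 0.135), so the policy transiently explores more after a change and settles back down afterwards.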
