Non-Deterministic Policy Improvement Stabilizes Approximated Reinforcement Learning

12/22/2016
by Wendelin Böhmer, et al.

This paper investigates a type of instability that is linked to the greedy policy improvement in approximated reinforcement learning. We show empirically that non-deterministic policy improvement can stabilize methods like least-squares policy iteration (LSPI) by controlling the stochasticity of the improvement step. Additionally, we show that a suitable representation of the value function also stabilizes the solution to some degree. The presented approach is simple and should transfer easily to more sophisticated algorithms such as deep reinforcement learning.
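To illustrate the idea, the sketch below contrasts a greedy improvement step with a softmax (Boltzmann) improvement, one common way to make the step non-deterministic. The temperature parameter `tau` controls the stochasticity: small values approach the greedy policy, larger values spread probability across actions. This is a minimal, hypothetical example of a stochastic improvement operator, not the authors' exact formulation.

```python
import numpy as np

def softmax_policy(q_values: np.ndarray, tau: float = 1.0) -> np.ndarray:
    """Softmax (Boltzmann) policy improvement over action values.

    As tau -> 0 this approaches the greedy policy; larger tau yields a
    more stochastic improvement step. (Illustrative sketch only.)
    """
    z = (q_values - q_values.max()) / tau  # shift for numerical stability
    probs = np.exp(z)
    return probs / probs.sum()

def greedy_policy(q_values: np.ndarray) -> np.ndarray:
    """Deterministic greedy improvement: all mass on the argmax action."""
    probs = np.zeros_like(q_values)
    probs[np.argmax(q_values)] = 1.0
    return probs

# Hypothetical action values for a single state
q = np.array([1.0, 1.2, 0.9])
print(greedy_policy(q))             # [0. 1. 0.]
print(softmax_policy(q, tau=0.1))   # near-greedy
print(softmax_policy(q, tau=1.0))   # noticeably stochastic
```

In a policy-iteration loop, such a softmax policy would replace the argmax in the improvement step, so that small errors in the approximated value function no longer flip the policy abruptly between iterations.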
