Convergent Tree-Backup and Retrace with Function Approximation

05/25/2017
by Ahmed Touati, et al.

Off-policy learning is key to scaling up reinforcement learning, as it allows learning about a target policy from experience generated by a different behavior policy. Unfortunately, it has been challenging to combine off-policy learning with function approximation and multi-step bootstrapping in a way that yields both stable and efficient algorithms. In this paper, we show that the Tree Backup and Retrace algorithms are unstable with linear function approximation, both in theory and through specific counterexamples. Based on our analysis, we then derive stable and efficient gradient-based algorithms, compatible with accumulating or Dutch traces, using a novel methodology based on saddle-point methods. In addition to convergence guarantees, we provide a finite-sample analysis.
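To make the setting concrete, here is a minimal sketch (not the paper's proposed algorithm) of the naive semi-gradient Tree Backup(λ) update with linear function approximation, Q(s, a) = θᵀφ(s, a); this uncorrected form is the kind of update the abstract says can be unstable, whereas the paper's contribution is a stable saddle-point-based variant. All names, signatures, and the toy feature table below are illustrative assumptions.

```
import numpy as np

# Sketch of the naive semi-gradient Tree Backup(lambda) update with
# linear Q(s, a) = theta @ phi(s, a). The cutting coefficient is the
# target policy's probability of the taken action; Retrace would use
# lam * min(1, pi/mu) instead. (Hypothetical setup for illustration.)

n_states, n_actions, d = 5, 2, 4
rng = np.random.default_rng(0)
features = rng.normal(size=(n_states, n_actions, d))  # toy feature table

def phi(s, a):
    return features[s, a]

def pi(s):
    # Hypothetical target policy: uniform over actions.
    return np.full(n_actions, 1.0 / n_actions)

def tree_backup_step(theta, e, s, a, r, s_next, gamma=0.95, lam=0.9, alpha=0.1):
    q_sa = theta @ phi(s, a)
    # Expected next value under the *target* policy (no importance ratios).
    exp_q_next = sum(pi(s_next)[b] * (theta @ phi(s_next, b)) for b in range(n_actions))
    delta = r + gamma * exp_q_next - q_sa              # TD error
    e = gamma * lam * pi(s)[a] * e + phi(s, a)         # accumulating trace, cut by pi(a|s)
    theta = theta + alpha * delta * e                  # naive semi-gradient step
    return theta, e

theta, e = np.zeros(d), np.zeros(d)
theta, e = tree_backup_step(theta, e, s=0, a=1, r=1.0, s_next=2)
```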
