Convergent Tree-Backup and Retrace with Function Approximation
Off-policy learning is key to scaling up reinforcement learning, as it allows an agent to learn about a target policy from experience generated by a different behavior policy. Unfortunately, it has been challenging to combine off-policy learning with function approximation and multi-step bootstrapping in a way that yields algorithms that are both stable and efficient. In this paper, we show that the Tree Backup and Retrace algorithms are unstable with linear function approximation, both in theory and through concrete counterexamples. Based on our analysis, we then derive stable and efficient gradient-based algorithms, compatible with accumulating or Dutch traces, using a novel methodology based on saddle-point methods. In addition to convergence guarantees, we provide a finite-sample analysis.
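For context, the multi-step operators at issue differ only in their per-step trace coefficient: Tree Backup uses c_k = λ π(a_k|s_k), while Retrace uses c_k = λ min(1, π(a_k|s_k)/μ(a_k|s_k)). The Python sketch below illustrates the plain forward-view, semi-gradient form of such an update with linear action-value features, i.e. the kind of update whose instability is at stake here, not the corrected saddle-point algorithm the paper proposes. Function names and dictionary keys are illustrative, not the paper's notation.

```python
import numpy as np


def trace_coefficient(pi_prob, mu_prob, lam, mode="retrace"):
    """Per-step trace coefficient c_k.

    Tree Backup: c_k = lam * pi(a_k | s_k)
    Retrace:     c_k = lam * min(1, pi(a_k | s_k) / mu(a_k | s_k))
    """
    if mode == "tree_backup":
        return lam * pi_prob
    return lam * min(1.0, pi_prob / mu_prob)


def semi_gradient_update(theta, traj, gamma, lam, alpha, mode="retrace"):
    """One forward-view semi-gradient update of linear weights `theta`,
    where Q(s, a) = theta @ phi(s, a), along a behavior-policy trajectory.

    Each element of `traj` is a dict with illustrative keys:
      phi          -- feature vector phi(s_k, a_k)
      reward       -- r_{k+1}
      exp_next_phi -- E_{a ~ pi}[ phi(s_{k+1}, a) ]  (expected next feature)
      pi_prob      -- pi(a_{k+1} | s_{k+1})
      mu_prob      -- mu(a_{k+1} | s_{k+1})
    """
    grad = np.zeros_like(theta)
    for t in range(len(traj)):
        correction, c_prod = 0.0, 1.0
        for k in range(t, len(traj)):
            step = traj[k]
            # Corrected TD error:
            # delta_k = r_{k+1} + gamma * E_pi[Q(s_{k+1}, .)] - Q(s_k, a_k)
            delta_k = (step["reward"]
                       + gamma * theta @ step["exp_next_phi"]
                       - theta @ step["phi"])
            correction += (gamma ** (k - t)) * c_prod * delta_k
            # Later TD errors are weighted by the running product of trace coefficients.
            c_prod *= trace_coefficient(step["pi_prob"], step["mu_prob"], lam, mode)
        # Semi-gradient step toward the multi-step target Q(s_t, a_t) + correction.
        grad += correction * traj[t]["phi"]
    return theta + alpha * grad / len(traj)
```

Because the target above is not a true gradient of any objective under off-policy sampling with linear features, iterating this update can diverge; this is the failure mode the gradient-based, saddle-point formulation is designed to remove.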