Truncated Emphatic Temporal Difference Methods for Prediction and Control

08/11/2021
by Shangtong Zhang, et al.

Emphatic Temporal Difference (TD) methods are a class of off-policy Reinforcement Learning (RL) methods involving the use of followon traces. Despite the theoretical success of emphatic TD methods in addressing the notorious deadly triad (Sutton and Barto, 2018) of off-policy RL, there are still three open problems. First, the motivation for emphatic TD methods proposed by Sutton et al. (2016) does not align with the convergence analysis of Yu (2015). Namely, a quantity used by Sutton et al. (2016) that is expected to be essential for the convergence of emphatic TD methods is not used in the actual convergence analysis of Yu (2015). Second, followon traces typically suffer from large variance, making them hard to use in practice. Third, despite the seminal work of Yu (2015) confirming the asymptotic convergence of some emphatic TD methods for prediction problems, there is still no finite sample analysis for any emphatic TD method for prediction, much less control. In this paper, we address those three open problems simultaneously by using truncated followon traces in emphatic TD methods. Unlike the original followon traces, which depend on all previous history, truncated followon traces depend on only finite history, reducing variance and enabling the finite sample analysis of our proposed emphatic TD methods for both prediction and control.
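To make the distinction between full and truncated followon traces concrete, below is a minimal sketch, assuming the standard followon trace recursion F_t = i(S_t) + gamma * rho_{t-1} * F_{t-1} from Sutton et al. (2016), where i is the interest function and rho the importance sampling ratio. The truncation scheme shown (looking back at most n steps) is a simplified stand-in for illustration, not necessarily the exact construction used in the paper.

```python
import numpy as np

def followon_trace(interest, rho, gamma):
    """Standard followon trace: F_t = i(S_t) + gamma * rho_{t-1} * F_{t-1}.

    Depends on the entire history of importance sampling ratios,
    which is why its variance can grow large.
    """
    F = np.zeros(len(interest))
    F[0] = interest[0]
    for t in range(1, len(interest)):
        F[t] = interest[t] + gamma * rho[t - 1] * F[t - 1]
    return F

def truncated_followon_trace(interest, rho, gamma, n):
    """Truncated followon trace (illustrative): accumulate the same sum,
    but over only the last n steps, so the trace depends on finite history.
    """
    F = np.zeros(len(interest))
    for t in range(len(interest)):
        total, weight = 0.0, 1.0
        for k in range(min(n, t) + 1):  # look back at most n steps
            total += weight * interest[t - k]
            if t - k - 1 >= 0:
                weight *= gamma * rho[t - k - 1]
        F[t] = total
    return F

# Toy usage: with n at least the episode length, the truncated trace
# coincides with the full followon trace.
rng = np.random.default_rng(0)
interest = np.ones(10)
rho = rng.uniform(0.5, 1.5, size=10)
print(followon_trace(interest, rho, gamma=0.9))
print(truncated_followon_trace(interest, rho, gamma=0.9, n=10))
```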
