Near-optimal Reinforcement Learning in Factored MDPs

03/15/2014
by Ian Osband, et al.

Any reinforcement learning algorithm that applies to all Markov decision processes (MDPs) will suffer Ω(√(SAT)) regret on some MDP, where T is the elapsed time and S and A are the cardinalities of the state and action spaces. This implies T = Ω(SA) time to guarantee a near-optimal policy. In many settings of practical interest, due to the curse of dimensionality, S and A can be so enormous that this learning time is unacceptable. We establish that, if the system is known to be a factored MDP, it is possible to achieve regret that scales polynomially in the number of parameters encoding the factored MDP, which may be exponentially smaller than S or A. We provide two algorithms that satisfy near-optimal regret bounds in this context: posterior sampling reinforcement learning (PSRL) and an upper confidence bound algorithm (UCRL-Factored).
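For intuition, here is a minimal sketch of posterior sampling for reinforcement learning in a plain tabular, finite-horizon MDP (not the factored construction the paper analyzes): at the start of each episode an MDP is drawn from the posterior, solved by backward induction, and the resulting greedy policy is followed while the sufficient statistics are updated. The Dirichlet/Gaussian priors, the episodic formulation, and the `env_reset`/`env_step` callables are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def sample_mdp(counts, reward_sums, reward_counts, rng):
    """Draw one MDP from the posterior (assumed priors: Dirichlet(1) over
    transitions, standard-normal mean with unit observation noise for rewards)."""
    S, A, _ = counts.shape
    P = np.zeros((S, A, S))
    R = np.zeros((S, A))
    for s in range(S):
        for a in range(A):
            P[s, a] = rng.dirichlet(counts[s, a] + 1.0)
            n = reward_counts[s, a]
            mean = reward_sums[s, a] / max(n, 1.0)
            R[s, a] = rng.normal(mean, 1.0 / np.sqrt(n + 1.0))
    return P, R

def solve_finite_horizon(P, R, H):
    """Backward induction: compute an H-step greedy policy for the sampled MDP."""
    S, A, _ = P.shape
    V = np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = R + P @ V                # Q[s, a] = R[s, a] + sum_s' P[s, a, s'] * V[s']
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

def psrl(env_reset, env_step, S, A, H, episodes, seed=0):
    """PSRL loop: sample, solve, act, update.  `env_reset()` returns an initial
    state; `env_step(s, a)` returns (next_state, reward) -- both are hypothetical
    hooks standing in for whatever simulator is available."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((S, A, S))      # transition counts N(s, a, s')
    reward_sums = np.zeros((S, A))    # cumulative observed reward per (s, a)
    reward_counts = np.zeros((S, A))  # visit counts per (s, a)
    for _ in range(episodes):
        P, R = sample_mdp(counts, reward_sums, reward_counts, rng)
        policy = solve_finite_horizon(P, R, H)
        s = env_reset()
        for h in range(H):
            a = policy[h, s]
            s_next, r = env_step(s, a)
            counts[s, a, s_next] += 1
            reward_sums[s, a] += r
            reward_counts[s, a] += 1
            s = s_next
    return counts, reward_sums, reward_counts
```

The point of the paper is that, when the MDP is known to factor, the posterior (or the confidence sets in UCRL-Factored) can be maintained over the parameters of each factor rather than over the full S x A transition table above, which is what yields regret polynomial in the number of factored parameters.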
