Near-optimal Reinforcement Learning in Factored MDPs
Any reinforcement learning algorithm that applies to all Markov decision processes (MDPs) will suffer Ω(√(SAT)) regret on some MDP, where T is the elapsed time and S and A are the cardinalities of the state and action spaces. This implies T = Ω(SA) time to guarantee a near-optimal policy. In many settings of practical interest, due to the curse of dimensionality, S and A can be so enormous that this learning time is unacceptable. We establish that, if the system is known to be a factored MDP, it is possible to achieve regret that scales polynomially in the number of parameters encoding the factored MDP, which may be exponentially smaller than S or A. We provide two algorithms that satisfy near-optimal regret bounds in this context: posterior sampling reinforcement learning (PSRL) and an upper confidence bound algorithm (UCRL-Factored).
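For readers unfamiliar with posterior sampling for reinforcement learning, the sketch below illustrates the basic PSRL loop in its simplest (unfactored, tabular) form: maintain a posterior over the MDP, sample one MDP per episode, act greedily with respect to the sample, and update the posterior with what is observed. This is a minimal illustration only, not the factored-MDP algorithm analyzed in the paper; the toy environment, the prior choices (Dirichlet over transitions, Beta over Bernoulli rewards), and all variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

S, A, H = 6, 2, 10          # states, actions, horizon (toy sizes)
EPISODES = 500

# Toy "true" MDP used only to simulate experience.
true_P = rng.dirichlet(np.ones(S), size=(S, A))   # true_P[s, a] is a distribution over next states
true_R = rng.uniform(size=(S, A))                 # Bernoulli reward means

# Posterior statistics: Dirichlet over transitions, Beta over Bernoulli rewards.
alpha = np.ones((S, A, S))
beta_a = np.ones((S, A))
beta_b = np.ones((S, A))

def solve(P, R):
    """Finite-horizon backward induction; returns a greedy policy for each step."""
    V = np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = R + P @ V              # Q[s, a] = R[s, a] + sum_s' P[s, a, s'] * V[s']
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

for ep in range(EPISODES):
    # 1) Sample an MDP from the current posterior.
    P_hat = np.array([[rng.dirichlet(alpha[s, a]) for a in range(A)] for s in range(S)])
    R_hat = rng.beta(beta_a, beta_b)

    # 2) Solve the sampled MDP and follow its optimal policy for one episode.
    policy = solve(P_hat, R_hat)
    s = 0
    for h in range(H):
        a = policy[h, s]
        r = rng.binomial(1, true_R[s, a])          # reward observed from the true MDP
        s_next = rng.choice(S, p=true_P[s, a])

        # 3) Update the posterior with the observed transition and reward.
        alpha[s, a, s_next] += 1
        beta_a[s, a] += r
        beta_b[s, a] += 1 - r
        s = s_next
```

Roughly speaking, UCRL-Factored follows the same episodic structure but, instead of sampling from a posterior, acts according to an optimistic MDP chosen from a confidence set; the paper's contribution is to exploit the factored structure so that these sets (and the resulting regret) scale with the number of parameters of the factorization rather than with S or A.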