Online Learning in Kernelized Markov Decision Processes

11/04/2019
by   Sayak Ray Chowdhury, et al.

We develop low-regret algorithms for learning episodic Markov decision processes, using kernel approximation techniques. The algorithms follow both the Upper Confidence Bound (UCB) and the Posterior (Thompson) Sampling (PSRL) philosophies, and work in the general setting of continuous state and action spaces when the true, unknown transition dynamics are assumed to have smoothness induced by an appropriate Reproducing Kernel Hilbert Space (RKHS).
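To illustrate the core ingredient behind such UCB-style algorithms, the sketch below fits a Gaussian-process / kernel-ridge model to toy transition data and forms an optimistic (mean plus scaled standard deviation) estimate. This is a minimal illustration, not the paper's algorithm: the RBF kernel, length scale, noise level, and confidence multiplier `beta` are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, length_scale=0.5):
    # Squared-exponential kernel; functions in its RKHS are smooth.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

def gp_posterior(X_train, y_train, X_query, noise=0.1):
    # Kernel-ridge / GP posterior mean and per-point std deviation.
    K = rbf_kernel(X_train, X_train) + noise ** 2 * np.eye(len(X_train))
    Kq = rbf_kernel(X_query, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Kq @ alpha
    v = np.linalg.solve(K, Kq.T)
    var = 1.0 - np.einsum("ij,ji->i", Kq, v)  # prior k(x, x) = 1
    return mean, np.sqrt(np.maximum(var, 0.0))

# Toy 1-D "transition" data: next state = sin(2 * x) plus noise,
# where x stands in for a (state, action) feature.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(30)

Xq = np.linspace(-2, 2, 5)[:, None]
mu, sigma = gp_posterior(X, y, Xq)
beta = 2.0                 # confidence multiplier (a tuning assumption)
ucb = mu + beta * sigma    # optimistic estimate that drives exploration
```

A PSRL-style variant would instead sample a dynamics model from this posterior each episode and plan against the sample, rather than acting optimistically.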
