A Finite Sample Complexity Bound for Distributionally Robust Q-learning

02/26/2023
by Shengbo Wang, et al.

We consider a reinforcement learning setting in which the deployment environment differs from the training environment. Applying a robust Markov decision process formulation, we extend the distributionally robust Q-learning framework studied in Liu et al. [2022]. Further, we improve the design and analysis of their multi-level Monte Carlo estimator. Assuming access to a simulator, we prove that the worst-case expected sample complexity of our algorithm to learn the optimal robust Q-function within an ϵ error in the sup norm is upper bounded by Õ(|S||A|(1-γ)^{-5}ϵ^{-2}p_∧^{-6}δ^{-4}), where γ is the discount rate, p_∧ is the minimal non-zero support probability of the transition kernels, and δ is the uncertainty size. This is the first sample complexity result for the model-free robust RL problem. Simulation studies further validate our theoretical results.
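To make the abstract's ingredients concrete, below is a minimal, illustrative Python sketch of (i) a distributionally robust value under a KL-divergence uncertainty set of radius δ, evaluated through its dual representation, and (ii) a randomized multi-level Monte Carlo (Blanchet–Glynn style) estimator of that robust value, the kind of debiasing device the paper builds on. This is not the authors' exact algorithm or analysis: the KL uncertainty set, the grid search over the dual variable, and the function names (`robust_value_kl`, `mlmc_robust_target`, `sample_next_v`) are assumptions made here for illustration only.

```python
import numpy as np

def robust_value_kl(v_samples, delta, alphas=np.logspace(-3, 3, 60)):
    """Worst-case expected next-state value over a KL ball of radius delta
    around the (empirical) transition distribution, via the dual form
        sup_{alpha > 0}  -alpha * log E[exp(-V(s')/alpha)] - alpha * delta.
    The sup over alpha is approximated by a grid search (illustrative only)."""
    v = np.asarray(v_samples, dtype=float)
    best = v.min()  # alpha -> 0+ limit of the dual objective
    for a in alphas:
        z = -v / a
        log_mean_exp = z.max() + np.log(np.mean(np.exp(z - z.max())))  # stable
        best = max(best, -a * log_mean_exp - a * delta)
    return best

def mlmc_robust_target(sample_next_v, delta, rate=0.5, rng=None):
    """Randomized multi-level Monte Carlo estimate of the robust value.
    `sample_next_v(n)` is a hypothetical simulator call returning n sampled
    next-state values V(s') for a fixed state-action pair."""
    rng = np.random.default_rng() if rng is None else rng
    n = rng.geometric(rate) - 1              # level N with P(N = n) = rate*(1-rate)^n
    m = 2 ** (n + 1)
    v = sample_next_v(m)
    base = robust_value_kl(v[:1], delta)                     # one-sample plug-in
    fine = robust_value_kl(v, delta)                         # all 2^(n+1) samples
    halves = 0.5 * (robust_value_kl(v[::2], delta)
                    + robust_value_kl(v[1::2], delta))       # two disjoint halves
    p_n = rate * (1.0 - rate) ** n
    return base + (fine - halves) / p_n                      # debiased telescoping term
```

In a tabular robust Q-learning loop, such a target would be combined with the reward and discount in the usual update, e.g. Q(s,a) ← (1-β)Q(s,a) + β(r + γ · target); the multi-level construction removes the bias that a naive plug-in estimate of the nonlinear robust Bellman operator would introduce.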
