Distributionally Robust Offline Reinforcement Learning with Linear Function Approximation

09/14/2022
by Xiaoteng Ma et al.

Among the reasons hindering reinforcement learning (RL) applications to real-world problems, two factors are critical: limited data and the mismatch between the testing environment (the real environment in which the policy is deployed) and the training environment (e.g., a simulator). This paper addresses these issues simultaneously with distributionally robust offline RL, where we learn a distributionally robust policy from historical data collected in the source environment by optimizing against a worst-case perturbation thereof. In particular, we move beyond tabular settings and consider linear function approximation. More specifically, we consider two settings, one where the dataset is well-explored and the other where the dataset has sufficient coverage of the optimal policy. We propose two algorithms, one for each of the two settings, that achieve error bounds of Õ(d^{1/2}/N^{1/2}) and Õ(d^{3/2}/N^{1/2}) respectively, where d is the feature dimension of the linear function approximation and N is the number of trajectories in the dataset. To the best of our knowledge, these are the first non-asymptotic sample-complexity results in this setting. Diverse experiments are conducted to demonstrate our theoretical findings, showing the superiority of our algorithms over their non-robust counterparts.
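
The abstract's core idea, optimizing against a worst-case perturbation of the nominal dynamics while representing values with linear features, can be illustrated with a small sketch. The snippet below is not the paper's algorithm: it is a hedged illustration that combines a simple contamination-style robust Bellman target with a ridge-regularized least-squares fit over linear features. The function name, the parameters rho and reg, and the choice of uncertainty set are all assumptions made for the example.

```python
import numpy as np

def robust_linear_backup(phi, rewards, next_values, rho=0.1, gamma=0.99, reg=1e-3):
    """Illustrative distributionally robust least-squares backup (a sketch,
    not the paper's estimator).

    Under a rho-contamination uncertainty set, the worst-case expected
    next-state value is (1 - rho) * E[V(s')] + rho * min V, so each
    regression target is pulled toward the lowest-value successor.

    phi         : (N, d) feature matrix for the sampled (s, a) pairs
    rewards     : (N,) observed rewards
    next_values : (N,) current value estimates at the sampled next states
    rho         : size of the uncertainty set (rho = 0 recovers the
                  standard non-robust backup)
    """
    # Worst-case target: the adversary mixes probability mass rho onto
    # the lowest-value successor state.
    worst = next_values.min()
    targets = rewards + gamma * ((1.0 - rho) * next_values + rho * worst)

    # Ridge-regularized least squares: theta = (Phi^T Phi + reg I)^{-1} Phi^T y
    d = phi.shape[1]
    theta = np.linalg.solve(phi.T @ phi + reg * np.eye(d), phi.T @ targets)
    return theta  # robust value estimate at features x is x @ theta

# Toy usage with random data, purely to show the interface.
rng = np.random.default_rng(0)
N, d = 500, 8
phi = rng.normal(size=(N, d))
rewards = rng.normal(size=N)
next_values = rng.normal(size=N)
theta = robust_linear_backup(phi, rewards, next_values, rho=0.2)
print("fitted weights:", theta)
```

Iterating such a backup from the last stage backward gives a crude robust analogue of least-squares value iteration; the paper's actual algorithms and uncertainty sets differ and come with the guarantees stated in the abstract.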
