Risk-Averse Model Uncertainty for Distributionally Robust Safe Reinforcement Learning

01/30/2023
by   James Queeney, et al.

Many real-world domains require safe decision making in the presence of uncertainty. In this work, we propose a deep reinforcement learning framework for approaching this important problem. We consider a risk-averse perspective towards model uncertainty through the use of coherent distortion risk measures, and we show that our formulation is equivalent to a distributionally robust safe reinforcement learning problem with robustness guarantees on performance and safety. We propose an efficient implementation that only requires access to a single training environment, and we demonstrate that our framework produces robust, safe performance on a variety of continuous control tasks with safety constraints in the Real-World Reinforcement Learning Suite.
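To make the risk-averse perspective concrete, the sketch below illustrates a coherent distortion risk measure in general, using CVaR as the standard example: a concave distortion function reweights the empirical distribution of returns toward the worst outcomes. This is an illustrative toy, not the paper's implementation; the distribution of returns across sampled environment models and the 25% risk level are hypothetical assumptions.

```python
import numpy as np

def distortion_risk(returns, g):
    """Distortion risk measure of a sample of returns under distortion g.

    Sorts returns ascending and weights each order statistic by the
    increment of the distortion function g over the empirical CDF grid.
    """
    x = np.sort(np.asarray(returns, dtype=float))
    n = len(x)
    cdf = np.arange(n + 1) / n           # empirical CDF grid 0, 1/n, ..., 1
    w = g(cdf[1:]) - g(cdf[:-1])         # probability weights after distortion
    return float(np.dot(w, x))

def cvar_distortion(alpha):
    # CVaR at level alpha arises from the concave distortion
    # g(u) = min(u / alpha, 1), which concentrates probability mass
    # on the worst (lowest-return) outcomes.
    return lambda u: np.minimum(u / alpha, 1.0)

rng = np.random.default_rng(0)
# Hypothetical returns of one policy evaluated across sampled environment models
returns = rng.normal(loc=100.0, scale=15.0, size=10_000)

mean_return = returns.mean()
cvar_25 = distortion_risk(returns, cvar_distortion(0.25))
# The risk-averse value averages over the worst 25% of models, so it is
# lower than the mean; optimizing it trades average performance for
# robustness to model uncertainty.
```

The identity distortion g(u) = u recovers the ordinary (risk-neutral) mean, so the risk level interpolates between average-case and worst-case objectives.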
