Modeling Survival in Model-Based Reinforcement Learning

04/18/2020
by   SM, et al.

Although recent model-free reinforcement learning algorithms have been shown to be capable of mastering complicated decision-making tasks, the sample complexity of these methods remains a hurdle to using them in many real-world applications. In this regard, model-based reinforcement learning offers some remedies. Yet, inherently, model-based methods are more computationally expensive and susceptible to sub-optimality. One reason is that model-generated data are always less accurate than real data, which often leads to inaccurate transition and reward function models. To mitigate this problem, this work introduces the notion of survival, discussing cases in which the agent's goal is to survive and drawing an analogy to maximizing expected rewards. To that end, a substitute for the reward function approximator is introduced that learns to avoid terminal states rather than to maximize accumulated rewards from safe states. Focusing on terminal states, which constitute only a small fraction of the state space, drastically reduces the training effort. Next, a model-based reinforcement learning method, Survive, is proposed to train an agent to avoid dangerous states through a safety map model built upon temporal credit assignment in the vicinity of terminal states. Finally, the performance of the presented algorithm is investigated and compared with current methods.
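To make the idea of a safety map more concrete, below is a minimal sketch (not the paper's implementation) of how danger could be propagated backwards from terminal states with a TD-style update and then used in place of a learned reward model. The tabular state space and the names `danger`, `gamma_d`, `update_safety_map`, and `survival_score` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical safety-map sketch: a tabular "danger" estimate replaces
# a learned reward model; danger is propagated back from terminal states.

n_states = 25          # illustrative tabular state space
gamma_d = 0.9          # how far danger is propagated back from terminal states
alpha = 0.1            # learning rate for the TD-style update

danger = np.zeros(n_states)   # estimated danger of each state (the safety map)

def update_safety_map(transitions):
    """Temporal credit assignment near terminal states: a state from which
    the episode terminated gets danger 1; earlier states inherit a
    discounted fraction of their successor's danger."""
    for s, s_next, terminal in transitions:
        target = 1.0 if terminal else gamma_d * danger[s_next]
        danger[s] += alpha * (target - danger[s])

def survival_score(state):
    """Planner hook: prefer actions that lead to low-danger states,
    instead of maximizing a learned reward model."""
    return -danger[state]

# Toy usage: one observed trajectory that ends in a terminal state.
trajectory = [(0, 1, False), (1, 2, False), (2, 3, True)]
for _ in range(50):
    update_safety_map(trajectory)
print(np.round(danger[:4], 3))   # danger rises toward the terminal state
```

In this toy run the danger estimates converge to roughly 0.81, 0.9, and 1.0 for the three visited states, illustrating how only states near termination need to be modeled accurately.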
