Accelerated Reinforcement Learning for Temporal Logic Control Objectives

05/09/2022
by Yiannis Kantaros, et al.

This paper addresses the problem of learning control policies for mobile robots modeled as unknown Markov Decision Processes (MDPs) that are tasked with temporal logic missions, such as sequencing, coverage, or surveillance. The MDP captures uncertainty in the workspace structure and the outcomes of control decisions. The control objective is to synthesize a control policy that maximizes the probability of accomplishing a high-level task, specified as a Linear Temporal Logic (LTL) formula. To address this problem, we propose a novel accelerated model-based reinforcement learning (RL) algorithm for LTL control objectives that is capable of learning control policies significantly faster than related approaches. Its sample efficiency relies on biasing exploration towards directions that may contribute to task satisfaction. This is accomplished by leveraging an automaton representation of the LTL task as well as a continuously learned MDP model. Finally, we provide extensive comparative experiments that demonstrate the sample efficiency of the proposed method against recent temporal logic RL methods.
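One way to picture the biased exploration the abstract describes is an action-selection rule over the product of the MDP and the LTL task automaton: with some probability explore uniformly, and otherwise prefer actions that, under the continuously learned transition model, are expected to move the automaton closer to an accepting state. The sketch below is illustrative only; the names (`P_hat`, `next_q`, `dist_to_accept`) and the distance-based scoring are assumptions, not the paper's exact algorithm.

```python
import random

def biased_action(s, q, actions, P_hat, next_q, dist_to_accept,
                  eps=0.2, rng=random):
    """Pick an action in MDP state s with automaton state q.

    P_hat[(s, a)] is a learned dict of next-state probabilities,
    next_q(q, s2) is the automaton transition on reaching s2, and
    dist_to_accept(q) is a (hypothetical) graph distance from q to
    an accepting automaton state.
    """
    if rng.random() < eps:
        # Occasionally explore uniformly at random.
        return rng.choice(actions)

    def score(a):
        # Expected negative distance to acceptance under the model:
        # higher is better (closer to an accepting automaton state).
        return sum(p * -dist_to_accept(next_q(q, s2))
                   for s2, p in P_hat[(s, a)].items())

    # Greedily bias toward the action with the best expected progress.
    return max(actions, key=score)
```

In a toy two-state MDP where one action deterministically reaches a state that advances the automaton, setting `eps=0` makes the rule pick that action every time; in practice the learned model `P_hat` is refined online, so the bias sharpens as estimates improve.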
