Multi-Fidelity Reinforcement Learning with Gaussian Processes

12/18/2017
by Varun Suryan, et al.

This paper studies the problem of Reinforcement Learning (RL) using as few real-world samples as possible. A naive application of RL algorithms can be sample-inefficient in large and continuous state spaces. We present two versions of a Multi-Fidelity Reinforcement Learning (MFRL) algorithm that leverage Gaussian Processes (GPs) to learn the optimal policy in a real-world environment. In the MFRL framework, an agent uses multiple simulators of the real environment to perform actions. As fidelity increases along the simulator chain, the number of samples required in each successively higher-fidelity simulator decreases. By incorporating GPs into the MFRL framework, we achieve a further reduction in the number of learning samples as we move up the simulator chain. We evaluate the performance of our algorithms in real-world navigation experiments with a ground robot.
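The abstract describes a chain of simulators of increasing fidelity, where uncertainty estimates from a GP decide how many samples each level needs before moving up. Below is a minimal Python sketch of that idea on a toy 1-D reward function, not the paper's Q-learning formulation; the names `make_simulator`, `FIDELITY_CHAIN`, and `UNCERTAINTY_THRESHOLD`, the noise levels, and the sample-where-least-certain rule are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of multi-fidelity sampling with GPs (not the paper's code).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def make_simulator(noise):
    """Toy 1-D 'simulator': reward as a function of state; fidelity is
    controlled by the noise level (lower noise = higher fidelity)."""
    def step(state):
        return np.sin(3 * state) + rng.normal(0.0, noise)
    return step

# Simulator chain ordered from lowest to highest fidelity (assumed setup).
FIDELITY_CHAIN = [make_simulator(0.5), make_simulator(0.2), make_simulator(0.05)]
UNCERTAINTY_THRESHOLD = 0.15  # stop sampling a level once the GP is this sure

X, y = [], []  # samples accumulated across fidelity levels
grid = np.linspace(-2, 2, 200).reshape(-1, 1)  # candidate states

for level, sim in enumerate(FIDELITY_CHAIN):
    gp = GaussianProcessRegressor(kernel=RBF(0.5) + WhiteKernel(0.1),
                                  normalize_y=True)
    if X:
        # Seed the GP with everything learned at lower fidelities.
        gp.fit(np.array(X).reshape(-1, 1), np.array(y))
    for _ in range(50):  # sample budget per fidelity level
        if X:
            _, std = gp.predict(grid, return_std=True)
            if std.max() < UNCERTAINTY_THRESHOLD:
                break  # confident enough; promote to the next simulator
            state = grid[np.argmax(std), 0]  # query where the GP is least sure
        else:
            state = rng.uniform(-2, 2)  # no data yet: sample at random
        X.append(state)
        y.append(sim(float(state)))
        gp.fit(np.array(X).reshape(-1, 1), np.array(y))
    print(f"fidelity level {level}: {len(X)} total samples so far")
```

Running the sketch shows the intended effect: each higher-fidelity simulator starts from the GP posterior built below it, so it needs few (sometimes zero) new samples before the uncertainty threshold is met.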
