Towards Physically Safe Reinforcement Learning under Supervision

01/19/2019
by   Yinan Zhang, et al.

This paper addresses how a previously available control policy π_s can serve as a supervisor to train a new learned control policy π_L for a robot more quickly and safely. During trials, actions are taken from a weighted average of the supervisor and learned policies, with the weight initially favoring the supervisor so that physical trials remain safe and useful while the learned policy is still ineffective. As training proceeds, the weight is gradually shifted toward the learned policy. Because the blended output changes with the weight, the learned network must compensate to keep producing safe and reasonable actions at each weight setting. To handle this, a pioneer network is introduced: it pre-learns to reproduce the behavior of the current blended policy under the next planned weight setting, and then replaces the current learned network in the next set of trials. Experiments in OpenAI Gym demonstrate the effectiveness of the proposed method.
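The core mechanism can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: supervisor_policy, learned_policy, the 0.97 decay factor, and the closed-form pioneer target are all placeholders, and in the paper the learned and pioneer policies would be neural networks trained by reinforcement learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def supervisor_policy(obs):
    # Stand-in for the previously available supervisor policy pi_s.
    return -0.5 * obs

def learned_policy(obs):
    # Stand-in for the learned policy pi_L; in the paper this is a
    # network updated by RL, not random noise.
    return rng.normal(size=obs.shape)

def blend_actions(a_s, a_l, w):
    # Weighted average of the two policies: w = 1 is pure supervisor,
    # w = 0 is pure learned policy.
    return w * a_s + (1.0 - w) * a_l

def pioneer_target(a_s, a_l, w_now, w_next):
    # Hypothetical pioneer target: solve for the action a_p that makes
    # the blend under the new weight reproduce the current behavior,
    #   w_next*a_s + (1 - w_next)*a_p = w_now*a_s + (1 - w_now)*a_l.
    assert w_next < 1.0, "undefined when the blend is pure supervisor"
    current = blend_actions(a_s, a_l, w_now)
    return (current - w_next * a_s) / (1.0 - w_next)

w = 1.0  # heavy initial weight on the supervisor
for episode in range(100):
    obs = rng.normal(size=3)  # placeholder observation
    a_s, a_l = supervisor_policy(obs), learned_policy(obs)
    action = blend_actions(a_s, a_l, w)
    w_next = w * 0.97  # anneal the weight toward the learned policy
    target = pioneer_target(a_s, a_l, w, w_next)
    # ...the pioneer network would be regressed toward `target` here,
    # then swapped in as the learned policy for the next set of trials.
    w = w_next
```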
