SKILL-IL: Disentangling Skill and Knowledge in Multitask Imitation Learning

05/06/2022
by Bian Xihan, et al.

In this work, we introduce a new perspective for learning transferable content in multi-task imitation learning. Humans are able to transfer both skills and knowledge: if we can cycle to work and drive to the store, we can also cycle to the store and drive to work. Taking inspiration from this, we hypothesize that the latent memory of a policy network can be disentangled into two partitions, one containing the knowledge of the environmental context for the task and the other the generalizable skill needed to solve it. This improves training efficiency and yields better generalization over previously unseen combinations of skills in the same environment, as well as over the same task in unseen environments. We used the proposed approach to train a disentangled agent in two different multi-task imitation learning environments; in both cases we outperformed the SOTA by 30% in task success rate. We also demonstrated this for navigation on a real robot.
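As a rough illustration only (this is not the authors' code; the class name, layer sizes, and partition dimensions below are our own assumptions), a policy network whose latent vector is split into a skill partition and a knowledge partition might be sketched in PyTorch as follows:

    # Hypothetical sketch of a policy with a latent memory split into a
    # "skill" partition (task-general behaviour) and a "knowledge" partition
    # (environment context). Names and sizes are illustrative, not from the paper.
    import torch
    import torch.nn as nn

    class DisentangledPolicy(nn.Module):
        def __init__(self, obs_dim, act_dim, skill_dim=32, knowledge_dim=32):
            super().__init__()
            self.skill_dim = skill_dim
            # Encoder maps observations to the concatenated latent memory.
            self.encoder = nn.Sequential(
                nn.Linear(obs_dim, 128), nn.ReLU(),
                nn.Linear(128, skill_dim + knowledge_dim),
            )
            # Decoder conditions the action on both partitions.
            self.decoder = nn.Sequential(
                nn.Linear(skill_dim + knowledge_dim, 128), nn.ReLU(),
                nn.Linear(128, act_dim),
            )

        def forward(self, obs):
            z = self.encoder(obs)
            z_skill = z[..., :self.skill_dim]
            z_knowledge = z[..., self.skill_dim:]
            # Recombining a skill embedding with the knowledge embedding of a
            # different environment is what would enable transfer.
            action = self.decoder(torch.cat([z_skill, z_knowledge], dim=-1))
            return action, z_skill, z_knowledge

In this sketch, transfer amounts to pairing a skill embedding learned in one setting with a knowledge embedding from another; how the two partitions are kept disentangled during training is the subject of the paper itself.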
