Mapping Navigation Instructions to Continuous Control Actions with Position-Visitation Prediction

11/10/2018
by Valts Blukis, et al.

We propose an approach for mapping natural language instructions and raw observations to continuous control of a quadcopter drone. Our model predicts interpretable position-visitation distributions indicating where the agent should go during execution and where it should stop, and uses the predicted distributions to select the actions to execute. This two-step model decomposition allows for simple and efficient training using a combination of supervised learning and imitation learning. We evaluate our approach with a realistic drone simulator, and demonstrate absolute task-completion accuracy improvements of 16.85% over two state-of-the-art instruction-following methods.
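To make the two-step decomposition concrete, the sketch below illustrates one plausible way to structure it: a first-stage network that turns an observation and an instruction embedding into visitation and stopping distributions over map positions, and a second-stage policy that maps those distributions to continuous velocity commands. This is a minimal, hypothetical PyTorch sketch; all module names, layer sizes, and input formats are assumptions for illustration and do not reflect the authors' actual architecture.

```python
# Hypothetical sketch of the two-stage decomposition described in the abstract.
# Architecture details (layer sizes, inputs, action parameterization) are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisitationPredictor(nn.Module):
    """Stage 1: predict visitation and stopping distributions over map
    positions from an image observation and an instruction embedding."""
    def __init__(self, instr_dim=64):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        # Fuse instruction features with image features, then score each map cell.
        self.fuse = nn.Conv2d(32 + instr_dim, 32, 1)
        self.head = nn.Conv2d(32, 2, 1)  # channel 0: visit, channel 1: stop

    def forward(self, image, instr_emb):
        feats = self.image_encoder(image)                    # (B, 32, H, W)
        b, _, h, w = feats.shape
        instr = instr_emb[:, :, None, None].expand(b, -1, h, w)
        fused = F.relu(self.fuse(torch.cat([feats, instr], dim=1)))
        logits = self.head(fused)                            # (B, 2, H, W)
        # Normalize each channel into a distribution over map positions.
        return F.softmax(logits.flatten(2), dim=-1).view(b, 2, h, w)

class ActionPolicy(nn.Module):
    """Stage 2: map the predicted distributions (plus agent pose) to
    continuous control outputs; trainable with imitation learning."""
    def __init__(self, map_cells, pose_dim=3, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * map_cells + pose_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )

    def forward(self, distributions, pose):
        x = torch.cat([distributions.flatten(1), pose], dim=1)
        return self.net(x)  # e.g. forward and angular velocity commands

# Example usage with dummy tensors (shapes are illustrative only):
predictor = VisitationPredictor()
policy = ActionPolicy(map_cells=32 * 32)
image = torch.randn(1, 3, 128, 128)      # raw first-person observation
instr_emb = torch.randn(1, 64)           # encoded instruction
pose = torch.randn(1, 3)                 # agent position and heading
dists = predictor(image, instr_emb)      # interpretable visitation/stop maps
action = policy(dists, pose)             # continuous control action
```

Splitting the model this way mirrors the training recipe mentioned in the abstract: the first stage can be supervised directly with ground-truth visitation distributions, while only the second stage needs imitation learning.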

