Towards Understanding Language through Perception in Situated Human-Robot Interaction: From Word Grounding to Grammar Induction

12/12/2018
by Amir Aly, et al.

Robots increasingly collaborate with human users on different tasks that require high-level cognitive functions for making sense of the surrounding environment. A difficult challenge that we briefly highlight in this short paper is inferring the latent grammatical structure of language, which includes grounding parts of speech (e.g., verbs, nouns, adjectives, and prepositions) through visual perception, and inducing a Combinatory Categorial Grammar (CCG) for phrases. This paves the way towards grounding entire phrases so that a robot can understand human instructions appropriately during interaction.
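As a rough illustration only (the paper provides no code here), the sketch below shows the kind of CCG lexicon and derivation the abstract alludes to, using NLTK's CCG module; the example instruction "grasp the red cup" and the lexicon entries are assumptions for illustration, not taken from the paper.

    # Minimal sketch: a toy CCG lexicon over grounded parts of speech
    # (verb, determiner, adjective, noun) and a parse of a simple instruction.
    from nltk.ccg import chart, lexicon

    lex = lexicon.fromstring("""
        :- S, NP, N
        grasp => S/NP
        the => NP/N
        red => N/N
        cup => N
    """)

    parser = chart.CCGChartParser(lex, chart.DefaultRuleSet)

    # Print one CCG derivation for the instruction.
    for parse in parser.parse("grasp the red cup".split()):
        chart.printCCGDerivation(parse)
        break

In a grounded setting, the categories assigned to words like "red" or "cup" would be tied to visual referents rather than hand-written, which is the induction problem the paper targets.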
