Speaker-Sensitive Dual Memory Networks for Multi-Turn Slot Tagging

11/29/2017
by Young-Bum Kim, et al.

In multi-turn dialogs, natural language understanding models can make obvious errors when they are blind to contextual information. To incorporate dialog history, we present a neural architecture with Speaker-Sensitive Dual Memory Networks, which encode utterances differently depending on the speaker. This reflects the different extents of information available to the system: it knows only the surface form of user utterances, while it has the exact semantics of its own output. We performed experiments on real user data from Microsoft Cortana, a commercial personal assistant. The results show a significant performance improvement over state-of-the-art slot tagging models that use contextual information.
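To make the speaker-sensitive idea concrete, the following is a minimal sketch of a dual-memory encoder: past user turns are stored as encodings of their surface forms, while past system turns are stored as embeddings of their known dialog acts, and the current turn attends over both memories before slot tagging. This is an illustrative assumption-based sketch, not the authors' implementation; all class and parameter names (SpeakerSensitiveDualMemory, n_system_acts, etc.) are hypothetical.

```python
# Illustrative sketch of a speaker-sensitive dual memory encoder (assumptions,
# not the paper's code): user history is encoded from raw tokens, system
# history from its exact dialog-act ids, and both memories are attended
# separately to contextualize the current turn for slot tagging.
import torch
import torch.nn as nn

class SpeakerSensitiveDualMemory(nn.Module):
    def __init__(self, vocab_size, n_system_acts, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        # User memory: only the surface form of past user utterances is known.
        self.user_encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # System memory: the system's own output is available as exact
        # semantics, so embed its dialog-act id directly.
        self.system_encoder = nn.Embedding(n_system_acts, hidden_dim)
        self.tagger = nn.GRU(hidden_dim * 2, hidden_dim, batch_first=True)

    def attend(self, query, memory):
        # Soft attention of the current-turn query over one memory bank.
        scores = torch.softmax(memory @ query.unsqueeze(-1), dim=1)  # (B, H, 1)
        return (scores * memory).sum(dim=1)                          # (B, D)

    def forward(self, current_turn, past_user_turns, past_system_acts):
        # current_turn: (B, T) token ids; past_user_turns: (B, H, T) token ids;
        # past_system_acts: (B, H) dialog-act ids for H history turns.
        cur_emb, _ = self.user_encoder(self.embed(current_turn))     # (B, T, D)
        query = cur_emb[:, -1]                                       # (B, D)

        B, H, T = past_user_turns.shape
        user_hist = self.embed(past_user_turns).view(B * H, T, -1)
        _, user_mem = self.user_encoder(user_hist)
        user_mem = user_mem.squeeze(0).view(B, H, -1)                # (B, H, D)
        sys_mem = self.system_encoder(past_system_acts)              # (B, H, D)

        # Separate attention over the two speaker-specific memories.
        context = self.attend(query, user_mem) + self.attend(query, sys_mem)
        context = context.unsqueeze(1).expand(-1, cur_emb.size(1), -1)
        tag_states, _ = self.tagger(torch.cat([cur_emb, context], dim=-1))
        return tag_states  # per-token states to feed a slot classifier
```

In this sketch, keeping the two memories separate lets the model exploit the exact semantics of system turns while treating user turns as noisy surface text, which is the asymmetry the abstract describes.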
