Multi-Domain Dialogue State Tracking – A Purely Transformer-Based Generative Approach

10/24/2020
by Yan Zeng, et al.

We investigate the problem of multi-domain Dialogue State Tracking (DST) with an open vocabulary. Existing approaches exploit a BERT encoder and a copy-based RNN decoder, where the encoder first predicts the state operation and the decoder then generates new slot values. However, in this stacked encoder-decoder structure, the operation prediction objective affects only the BERT encoder, while the value generation objective affects mainly the RNN decoder. In this paper, we propose a purely Transformer-based framework that uses BERT as both encoder and decoder. In this way, the operation prediction objective and the value generation objective can jointly optimize the model for DST. At decoding time, we re-use the hidden states of the encoder in the self-attention mechanism of the corresponding decoder layer, constructing a flat model structure that allows effective parameter updating. Experimental results show that our approach substantially outperforms the existing state-of-the-art framework and achieves performance highly competitive with the best ontology-based approaches.
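To make the "flat" structure concrete, below is a minimal PyTorch sketch of one decoder layer that folds the encoder's hidden states from the corresponding layer into its own self-attention. The class name `FlatDecoderLayer`, the dimensions, and the exact concatenation scheme are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class FlatDecoderLayer(nn.Module):
    """Illustrative sketch: a decoder layer whose self-attention re-uses the
    encoder hidden states of the *corresponding* encoder layer, so encoder
    and decoder form one flat stack rather than a stacked encoder-decoder.
    All names and sizes are assumptions for demonstration."""

    def __init__(self, d_model: int = 768, n_heads: int = 12):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, dec_h: torch.Tensor, enc_h: torch.Tensor) -> torch.Tensor:
        # Keys and values are the same-layer encoder states concatenated with
        # the decoder states; queries come from the decoder states alone.
        kv = torch.cat([enc_h, dec_h], dim=1)
        attn_out, _ = self.self_attn(dec_h, kv, kv)
        dec_h = self.norm1(dec_h + attn_out)
        dec_h = self.norm2(dec_h + self.ffn(dec_h))
        return dec_h


# Toy usage: batch of 2 dialogues, 20 encoder context tokens,
# 5 slot-value tokens decoded so far at this layer.
layer = FlatDecoderLayer()
enc_h = torch.randn(2, 20, 768)  # hidden states from encoder layer i
dec_h = torch.randn(2, 5, 768)   # decoder states entering layer i
out = layer(dec_h, enc_h)
print(out.shape)  # torch.Size([2, 5, 768])
```

Under this assumed layout, gradients from the value generation loss flow through the shared attention directly into the encoder layers rather than only through a final encoder output, which is consistent with the joint optimization of both objectives that the abstract highlights.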
