Document Sub-structure in Neural Machine Translation
Current approaches to machine translation (MT) either translate sentences in isolation, disregarding the context in which they appear, or model context at the level of the full document, without a notion of any internal structure the document may have. In this work we exploit the fact that documents are rarely homogeneous blocks of text, but rather consist of parts covering different topics. Some documents, e.g. biographies and encyclopedia entries, have highly predictable, regular structures in which sections are characterised by different topics. We draw inspiration from Louis and Webber (2014), who use this information to improve MT, and transfer their proposal into the framework of neural MT. We compare two methods of incorporating information about the topic of the section in which each sentence appears: one using side constraints and the other using a cache-based model. We create and release the data on which we run our experiments: parallel corpora for three language pairs (Chinese-English, French-English, Bulgarian-English) built from Wikipedia biographies, preserving the boundaries of sections within the articles.
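To make the side-constraint idea concrete, the sketch below shows one common way such constraints are realised: a pseudo-token encoding the section topic is attached to each source sentence, so a standard NMT encoder can condition on it without architectural changes. This is a minimal illustration under assumptions, not the paper's exact implementation; the tag names, the `add_side_constraint` helper, and the choice to prepend (rather than append) the token are all hypothetical.

```python
# Hypothetical section-topic tags for Wikipedia biographies; the actual
# topic inventory used in the paper may differ.
SECTION_TOPIC_TAGS = {
    "early_life": "<topic:early_life>",
    "career": "<topic:career>",
    "personal_life": "<topic:personal_life>",
}

def add_side_constraint(source_sentence: str, section_topic: str) -> str:
    """Attach a topic pseudo-token to the source sentence as a side constraint."""
    tag = SECTION_TOPIC_TAGS.get(section_topic, "<topic:other>")
    return f"{tag} {source_sentence}"

# Example: a French source sentence from an "early life" section.
print(add_side_constraint("Il est né à Paris en 1903 .", "early_life"))
# -> "<topic:early_life> Il est né à Paris en 1903 ."
```

The appeal of this formulation is that the topic signal travels through the normal source vocabulary and attention mechanism, whereas the cache-based alternative keeps a separate memory of topical words across sentences; the two thus trade architectural simplicity against an explicit document-level state.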