Multi-task Learning for Universal Sentence Representations: What Syntactic and Semantic Information is Captured?

04/21/2018
by Wasi Uddin Ahmad, et al.

Learning distributed sentence representations is one of the key challenges in natural language processing. Previous work demonstrated that a recurrent neural network (RNN) based sentence encoder, trained on a large collection of annotated natural language inference data, transfers efficiently to other related tasks. In this paper, we show that jointly learning multiple tasks yields more generalizable sentence representations, supported by extensive experiments and analysis comparing multi-task and single-task learned sentence encoders. Quantitative analysis of the syntactic and semantic information captured by the sentence embeddings shows that multi-task learning captures syntactic information better, while single-task learning summarizes semantic information more coherently.
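
As a concrete illustration of the joint-learning setup described above, below is a minimal PyTorch sketch of a multi-task sentence encoder: a shared BiLSTM encoder with max pooling over time, plus one classification head per task, trained by alternating mini-batches so the shared parameters receive gradients from every task. The architecture, hyperparameters, and task names ("nli", "sentiment") are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class SharedSentenceEncoder(nn.Module):
    # Shared BiLSTM encoder: embeds tokens, runs a BiLSTM,
    # then max-pools over the time dimension to get one vector per sentence.
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, tokens):
        out, _ = self.lstm(self.embed(tokens))  # (batch, seq, 2*hidden_dim)
        return out.max(dim=1).values            # max-pooled sentence embedding

class MultiTaskModel(nn.Module):
    # One shared encoder, one linear classification head per task.
    def __init__(self, vocab_size, task_num_classes, hidden_dim=512):
        super().__init__()
        self.encoder = SharedSentenceEncoder(vocab_size, hidden_dim=hidden_dim)
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * hidden_dim, n)
            for task, n in task_num_classes.items()
        })

    def forward(self, tokens, task):
        return self.heads[task](self.encoder(tokens))

# Joint training loop (toy data): alternate mini-batches across tasks.
model = MultiTaskModel(vocab_size=20000,
                       task_num_classes={"nli": 3, "sentiment": 2})
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

batches = [
    ("nli", torch.randint(0, 20000, (8, 12)), torch.randint(0, 3, (8,))),
    ("sentiment", torch.randint(0, 20000, (8, 12)), torch.randint(0, 2, (8,))),
]
for task, tokens, labels in batches:
    loss = loss_fn(model(tokens, task), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Because every task's loss backpropagates through the same encoder, the pooled sentence vector is pushed toward representations useful across tasks, which is the property the paper's multi-task versus single-task comparison probes.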
