Concatenated p-mean Word Embeddings as Universal Cross-Lingual Sentence Representations

03/04/2018
by Andreas Rücklé, et al.

Average word embeddings are a common baseline for more sophisticated sentence embedding techniques. An important advantage of average word embeddings is their computational and conceptual simplicity. However, they typically fall short of the performance of more complex models such as InferSent. Here, we generalize the concept of average word embeddings to p-mean word embeddings, which are (almost) as efficiently computable. We show that the concatenation of different types of p-mean word embeddings considerably closes the gap to state-of-the-art methods such as InferSent monolingually and substantially outperforms these more complex techniques cross-lingually. In addition, our proposed method outperforms recently proposed baselines such as SIF and Sent2Vec by a solid margin, thus constituting a much harder-to-beat monolingual baseline for a wide variety of transfer tasks. Our data and code are publicly available.
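To make the idea concrete, here is a minimal NumPy sketch of p-mean sentence embeddings as described in the abstract: an element-wise power mean over the word vectors of a sentence, computed for several values of p and concatenated. The function names and the particular choice of p values are illustrative, not taken from the authors' released code.

```python
import numpy as np

def power_mean(vectors, p):
    """Element-wise power mean over a list of word vectors.

    Special cases: p=1 is the ordinary average, p=+inf the
    element-wise maximum, p=-inf the element-wise minimum.
    """
    X = np.stack(vectors)  # shape: (n_words, dim)
    if p == float("inf"):
        return X.max(axis=0)
    if p == float("-inf"):
        return X.min(axis=0)
    if p == 1:
        return X.mean(axis=0)
    # Sign-preserving power mean so that odd p (e.g. p=3)
    # also works for negative embedding components.
    m = np.mean(np.sign(X) * np.abs(X) ** p, axis=0)
    return np.sign(m) * np.abs(m) ** (1.0 / p)

def pmean_sentence_embedding(vectors, ps=(1, float("inf"), float("-inf"))):
    """Concatenate several p-mean embeddings into one sentence vector."""
    return np.concatenate([power_mean(vectors, p) for p in ps])
```

With d-dimensional word vectors and k values of p, the sentence representation has k*d dimensions; the paper's key observation is that concatenating means for several p (rather than using the average alone) is what closes much of the gap to trained encoders like InferSent.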
