Clinical Concept Embeddings Learned from Massive Sources of Medical Data

04/04/2018
by Andrew L. Beam et al.

Word embeddings have emerged as a popular approach to unsupervised learning of word relationships in machine learning and natural language processing. In this article, we benchmark two of the most popular algorithms, GloVe and word2vec, to assess their suitability for capturing medical relationships in large sources of biomedical data. Drawing on recent theoretical insights, we provide a unified view of these algorithms and demonstrate how different sources of data can be combined to construct the largest-ever set of embeddings for 108,477 medical concepts, using an insurance claims database of 60 million members, 20 million clinical notes, and 1.7 million full-text biomedical journal articles. We evaluate our approach, called cui2vec, on a set of clinically relevant benchmarks and in many instances demonstrate state-of-the-art performance relative to previous results. Finally, we provide a downloadable set of pre-trained embeddings for other researchers to use, as well as an online tool for interactive exploration of the cui2vec embeddings.
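As a concrete illustration of how pre-trained concept embeddings like these might be used, the sketch below loads a set of vectors keyed by UMLS concept unique identifiers (CUIs) and computes a cosine similarity between two concepts. The file name, file layout, and specific CUIs here are assumptions for illustration, not a description of the paper's actual release format.

```python
import numpy as np
import pandas as pd

# Load pre-trained embeddings. ASSUMED layout: one row per concept,
# the index column a UMLS CUI, remaining columns the embedding dimensions.
df = pd.read_csv("cui2vec_pretrained.csv", index_col=0)
vectors = {cui: row.to_numpy(dtype=float) for cui, row in df.iterrows()}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example CUIs (C0011849 = diabetes mellitus, C0020538 = hypertensive
# disease in UMLS); chosen for illustration only.
sim = cosine_similarity(vectors["C0011849"], vectors["C0020538"])
print(f"cosine similarity: {sim:.3f}")
```

Nearest-neighbor queries over such a dictionary (ranking all concepts by cosine similarity to a query CUI) are the typical way relationships like those in the paper's benchmarks are probed.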
