Extracting Similar Questions From Naturally-occurring Business Conversations

06/03/2022
by Xiliang Zhu et al.

Pre-trained contextualized embedding models such as BERT are a standard building block in many natural language processing systems. We demonstrate that the sentence-level representations produced by some off-the-shelf contextualized embedding models have a narrow distribution in the embedding space, and thus perform poorly for the task of identifying semantically similar questions in real-world English business conversations. We describe a method that uses appropriately tuned representations and a small set of exemplars to group questions of interest to business users in a visualization that can be used for data exploration or employee coaching.
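The paper itself does not publish code, but the grouping step it describes can be sketched as follows: given sentence embeddings for the questions and for a small set of exemplars, assign each question to its most similar exemplar by cosine similarity, leaving it ungrouped below a similarity threshold. The embedding values, the `group_by_exemplars` helper, and the `threshold` of 0.8 are all illustrative assumptions, not details from the paper.

```python
import numpy as np

def group_by_exemplars(question_vecs, exemplar_vecs, threshold=0.8):
    """Assign each question embedding to its most similar exemplar
    (by cosine similarity), or to no group (-1) below the threshold.

    The threshold value is a hypothetical choice for illustration.
    """
    # L2-normalize so that a dot product equals cosine similarity
    q = question_vecs / np.linalg.norm(question_vecs, axis=1, keepdims=True)
    e = exemplar_vecs / np.linalg.norm(exemplar_vecs, axis=1, keepdims=True)
    sims = q @ e.T                     # shape (n_questions, n_exemplars)
    best = sims.argmax(axis=1)         # nearest exemplar per question
    best_sim = sims.max(axis=1)
    return np.where(best_sim >= threshold, best, -1)

# Toy 2-d vectors standing in for tuned sentence representations
exemplars = np.array([[1.0, 0.0], [0.0, 1.0]])
questions = np.array([[0.9, 0.1], [0.1, 0.95], [-1.0, -1.0]])
print(group_by_exemplars(questions, exemplars))  # [ 0  1 -1]
```

In practice the vectors would come from a contextualized embedding model tuned so that semantically similar questions lie close together; as the abstract notes, some off-the-shelf models produce representations too narrowly distributed for this nearest-exemplar assignment to work well.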
