Pragmatically Informative Color Generation by Grounding Contextual Modifiers

10/09/2020
by   Zhengxuan Wu, et al.

Grounding language in contextual information is crucial for fine-grained natural language understanding. One important task that involves grounding contextual modifiers is color generation: given a reference color "green" and a modifier "bluey", how does one generate a color that represents "bluey green"? We propose a computational pragmatics model that formulates this color generation task as a recursive game between speakers and listeners. In our model, a pragmatic speaker reasons about the inferences a listener would make, and thus generates a modified color that is maximally informative in helping the listener recover the original referents. In this paper, we show that incorporating pragmatic information provides significant improvements in performance over state-of-the-art deep learning models that lack pragmatic inference and the flexibility to represent colors from a large continuous space. Our model achieves an absolute 98% performance for the test cases where the reference colors are unseen during training, and an absolute 40% performance for the test cases where both the reference colors and the modifiers are unseen during training.
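To make the speaker-listener recursion concrete, below is a minimal sketch of an RSA-style pragmatic speaker for color generation. It is not the paper's implementation: the helper names (literal_listener_score, pragmatic_speaker), the RGB representation of colors, the modifier-as-direction encoding, and the rationality parameter alpha are all illustrative assumptions. The idea it demonstrates is the one described above: the speaker scores candidate modified colors by how well a simulated literal listener could recover the reference and modifier, then prefers the most informative candidate.

import numpy as np

def literal_listener_score(produced_rgb, reference_rgb, modifier_direction):
    # Toy literal listener: how well does the shift from the reference color
    # to the produced color align with the modifier's expected direction?
    # (Hypothetical representation; the paper operates over a learned space.)
    shift = produced_rgb - reference_rgb
    denom = np.linalg.norm(shift) * np.linalg.norm(modifier_direction) + 1e-8
    return float(shift @ modifier_direction) / denom

def pragmatic_speaker(reference_rgb, modifier_direction, candidate_colors, alpha=5.0):
    # Pragmatic speaker: softmax over listener scores, then pick the candidate
    # that is maximally informative to the simulated listener.
    scores = np.array([
        literal_listener_score(c, reference_rgb, modifier_direction)
        for c in candidate_colors
    ])
    probs = np.exp(alpha * scores)
    probs /= probs.sum()
    return candidate_colors[int(np.argmax(probs))], probs

# Example: "bluey green" -- shift a green reference toward blue.
reference = np.array([0.0, 0.6, 0.2])   # reference "green" in RGB
modifier = np.array([0.0, -0.1, 0.8])   # assumed direction for "bluey"
candidates = [reference + 0.3 * np.random.randn(3) for _ in range(50)]
best, _ = pragmatic_speaker(reference, modifier, candidates)
print("pragmatically chosen color:", np.clip(best, 0, 1))

In this sketch the recursion is only one level deep (a speaker reasoning about a literal listener); deeper speaker-listener nesting follows the same pattern of scoring candidates against the level below.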
