Text2Colors: Guiding Image Colorization through Text-Driven Palette Generation

04/11/2018
by   Wonwoong Cho, et al.

In this paper, we propose a novel approach that generates multiple color palettes reflecting the semantics of input text and then colorizes a given grayscale image according to the generated palette. In contrast to existing approaches, our model can understand rich text input, whether a single word, a phrase, or a sentence, and generate multiple plausible palettes from it. To support this task, we introduce a manually curated dataset called Palette-and-Text (PAT), which consists of 10,183 pairs of text and corresponding color palettes. Our proposed model consists of two conditional generative adversarial networks: the text-to-palette generation networks and the palette-based colorization networks. The former employs a sequence-to-sequence model with an attention module to capture the semantics of the text input and produce relevant color palettes. The latter utilizes a U-Net architecture to colorize a grayscale image using the generated palette. Our evaluation shows that users preferred our generated palettes over the ground-truth palettes and that our model effectively reflects the given palette when colorizing an image.
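The abstract describes a two-stage pipeline: a text encoder with attention that emits a color palette, followed by a U-Net-style network that colorizes a grayscale image conditioned on that palette. Below is a minimal PyTorch sketch of the two generators under toy dimensions; all module names, layer sizes, and the noise vector z (used here to make the text-to-palette mapping one-to-many) are illustrative assumptions, not the authors' implementation, and the GAN discriminators and training loop are omitted.

```python
import torch
import torch.nn as nn

class TextToPaletteGenerator(nn.Module):
    """Encodes text with a GRU, attends over hidden states, emits a palette.
    A sketch of the text-to-palette stage; dimensions are illustrative."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=150, n_colors=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)          # scalar score per token
        # conditioning on noise z makes the mapping one-to-many,
        # i.e. several distinct palettes for the same phrase
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim + 16, 128), nn.ReLU(),
            nn.Linear(128, n_colors * 3),             # n_colors color triplets
        )

    def forward(self, tokens, z):
        h, _ = self.encoder(self.embed(tokens))       # (B, T, H)
        w = torch.softmax(self.attn(h), dim=1)        # attention over tokens
        ctx = (w * h).sum(dim=1)                      # (B, H) weighted context
        out = self.decoder(torch.cat([ctx, z], dim=1))
        return out.view(tokens.size(0), -1, 3)        # (B, n_colors, 3)

class PaletteColorizer(nn.Module):
    """Tiny U-Net-style stand-in: grayscale channel + palette -> 2 color channels."""
    def __init__(self, n_colors=5):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(1 + n_colors * 3, 32, 3, 2, 1), nn.ReLU())
        self.up = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, 1, 1), nn.Tanh())     # color channels in [-1, 1]

    def forward(self, gray, palette):
        B, _, H, W = gray.shape
        # broadcast the palette spatially and concatenate with the gray input
        pal = palette.view(B, -1, 1, 1).expand(B, palette.numel() // B, H, W)
        return self.up(self.down(torch.cat([gray, pal], dim=1)))

# usage: sample a palette from text, then colorize a grayscale image with it
gen = TextToPaletteGenerator(vocab_size=1000)
col = PaletteColorizer()
tokens = torch.randint(0, 1000, (1, 6))               # toy token ids for a phrase
palette = gen(tokens, torch.randn(1, 16))             # (1, 5, 3) color palette
color = col(torch.rand(1, 1, 64, 64), palette)        # predicted color channels
print(palette.shape, color.shape)
```

Two design points the sketch tries to mirror: sampling a fresh z yields a different palette for the same text, which is how one phrase can map to multiple palettes, and broadcasting the palette over the spatial grid before the encoder lets the colorization network decide locally where each palette color belongs.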
