LayoutDiffuse: Adapting Foundational Diffusion Models for Layout-to-Image Generation

02/16/2023
by   Jiaxin Cheng, et al.

Layout-to-image generation refers to the task of synthesizing photo-realistic images from semantic layouts. In this paper, we propose LayoutDiffuse, which adapts a foundational diffusion model pretrained on large-scale image or text-image datasets to layout-to-image generation. By adopting a novel neural adaptor based on layout attention and task-aware prompts, our method trains efficiently, generates images with both high perceptual quality and strong layout alignment, and requires less data. Experiments on three datasets show that our method significantly outperforms 10 other generative models based on GANs, VQ-VAE, and diffusion models.
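To make the layout-attention idea concrete, below is a minimal, hypothetical PyTorch sketch of what such an adaptor could look like: image feature tokens attend to learned embeddings of the layout's region labels, masked so each token only sees the regions that cover it, and the result is added back through a zero-initialized gate so the frozen backbone is unchanged at the start of fine-tuning. All module and argument names here are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class LayoutAttention(nn.Module):
    """Hypothetical layout-attention adaptor sketch (not the paper's code)."""

    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, dim)  # one vector per object class
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(dim, dim)
        # Zero-init gate: the pretrained backbone's behavior is preserved initially.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x, labels, masks):
        # x:      (B, N, C)  flattened UNet feature tokens
        # labels: (B, R)     class index of each of the R layout regions
        # masks:  (B, R, N)  1 where region r covers token n, else 0
        q = self.to_q(x)                        # (B, N, C)
        ctx = self.label_emb(labels)            # (B, R, C)
        k, v = self.to_k(ctx), self.to_v(ctx)   # (B, R, C)
        attn = torch.einsum("bnc,brc->bnr", q, k) / (x.shape[-1] ** 0.5)
        # Each token attends only to the layout regions that cover it.
        attn = attn.masked_fill(masks.transpose(1, 2) == 0, float("-inf"))
        attn = attn.softmax(dim=-1)
        attn = torch.nan_to_num(attn)           # tokens covered by no region get a zero update
        out = torch.einsum("bnr,brc->bnc", attn, v)
        return x + torch.tanh(self.gate) * self.proj(out)  # gated residual adaptor
```

In this sketch the adaptor would be inserted after attention blocks of the pretrained UNet and trained while the backbone stays frozen, which is one plausible way to realize the paper's claim of efficient, data-light adaptation.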
