Hierarchical text-conditional image generation with CLIP latents


Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Furthermore, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.
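
The two-stage pipeline described above (caption → prior → CLIP image embedding → decoder → image) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the module names, embedding dimension, and dummy networks are placeholder assumptions standing in for a CLIP text encoder, the prior, and the diffusion decoder.

```python
import math
import torch
import torch.nn as nn

# Assumed placeholder sizes; the real models use CLIP's embedding dimension
# and a diffusion decoder producing higher-resolution images.
D_CLIP = 512
IMAGE_SHAPE = (3, 64, 64)
VOCAB_SIZE = 49408

class DummyTextEncoder(nn.Module):
    """Stand-in for a frozen CLIP text encoder: caption tokens -> text embedding."""
    def __init__(self):
        super().__init__()
        self.embed = nn.EmbeddingBag(VOCAB_SIZE, D_CLIP)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.embed(tokens)                 # (B, D_CLIP)

class DummyPrior(nn.Module):
    """Stand-in for the prior: text embedding -> CLIP image embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(D_CLIP, D_CLIP)

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        return self.net(text_emb)                 # (B, D_CLIP)

class DummyDecoder(nn.Module):
    """Stand-in for the diffusion decoder: CLIP image embedding -> image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(D_CLIP, math.prod(IMAGE_SHAPE))

    def forward(self, image_emb: torch.Tensor) -> torch.Tensor:
        return self.net(image_emb).view(-1, *IMAGE_SHAPE)   # (B, 3, 64, 64)

@torch.no_grad()
def generate(caption_tokens: torch.Tensor) -> torch.Tensor:
    """Two-stage generation: caption -> image embedding (prior) -> image (decoder)."""
    text_emb = DummyTextEncoder()(caption_tokens)
    image_emb = DummyPrior()(text_emb)
    return DummyDecoder()(image_emb)

if __name__ == "__main__":
    tokens = torch.randint(0, VOCAB_SIZE, (2, 16))   # two dummy tokenized captions
    print(generate(tokens).shape)                     # torch.Size([2, 3, 64, 64])
```

The key design point the abstract emphasizes is the intermediate CLIP image embedding: sampling it explicitly with the prior, rather than conditioning the decoder directly on text, is what improves diversity and enables image variations and language-guided manipulations.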
