A system for generating 3D point clouds from complex prompts


While recent work on text-conditional 3D object generation has shown promising results, the state-of-the-art methods typically require multiple GPU-hours to produce a single sample. This is in stark contrast to state-of-the-art generative image models, which produce samples in a matter of seconds or minutes. In this paper, we explore an alternative method for 3D object generation which produces 3D models in only 1-2 minutes on a single GPU. Our method first generates a single synthetic view using a text-to-image diffusion model, and then produces a 3D point cloud using a second diffusion model which conditions on the generated image. While our method still falls short of the state-of-the-art in terms of sample quality, it is one to two orders of magnitude faster to sample from, offering a practical trade-off for some use cases. We release our pre-trained point cloud diffusion models, as well as evaluation code and models, at this https URL.
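As a rough illustration of the two-stage pipeline the abstract describes, the sketch below feeds a single synthetic view into an image-conditioned point cloud diffusion model plus an upsampler. It is a minimal sketch modeled on the example notebooks in the released point-e repository, not a definitive implementation: the checkpoint names (`base40M`, `upsample`), the guidance scales, and the assumption that the synthetic view has already been rendered to `synthetic_view.png` by a text-to-image model are all assumptions that may differ from the released code.

```python
# Minimal sketch of the two-stage pipeline, assuming the released point-e
# package is installed. Checkpoint names, sampler parameters, and the
# pre-rendered input image are assumptions and may differ from the repo.
import torch
from PIL import Image
from tqdm.auto import tqdm

from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Stage 2a: image-conditioned base model producing a coarse 1K-point cloud.
base_name = 'base40M'  # assumed checkpoint name
base_model = model_from_config(MODEL_CONFIGS[base_name], device)
base_model.eval()
base_model.load_state_dict(load_checkpoint(base_name, device))
base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name])

# Stage 2b: upsampler refining the coarse cloud to 4K points.
upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device)
upsampler_model.eval()
upsampler_model.load_state_dict(load_checkpoint('upsample', device))
upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample'])

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler_model],
    diffusions=[base_diffusion, upsampler_diffusion],
    num_points=[1024, 4096 - 1024],
    aux_channels=['R', 'G', 'B'],
    guidance_scale=[3.0, 3.0],
)

# Stage 1 (assumed done elsewhere): a text-to-image diffusion model renders
# a single synthetic view of the prompt, saved here as synthetic_view.png.
img = Image.open('synthetic_view.png')

samples = None
for x in tqdm(sampler.sample_batch_progressive(
        batch_size=1, model_kwargs=dict(images=[img]))):
    samples = x  # the final iterate holds the full 4K-point sample

pc = sampler.output_to_point_clouds(samples)[0]
print(pc.coords.shape)  # expected: (4096, 3) xyz coordinates plus RGB channels
```

On a single GPU this kind of sampling loop is what keeps generation in the 1-2 minute range the abstract claims, since both diffusion stages operate on small point sets rather than full 3D volumes.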
