
To my knowledge so far, I have only seen diffusion models used with images -- taking in 2D noise or image data and outputting a 2D image. But what if the problem setting is quite different: for a single given 1D vector, I need to generate several random 3D models. How do we approach this with a diffusion model, or a GAN if necessary? Is there a more suitable approach than a diffusion model or GAN?

I would like to build a 1D-to-3D diffusion model (or GAN) that takes one 1D vector as input but can output several distinct 3D models, given an unlimited dataset of 1D vectors paired with their corresponding 3D models.
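The core mechanism in both conditional GANs and conditional diffusion models is the same: the fixed 1D condition vector is combined with fresh random noise, so the same condition can yield many different 3D outputs. A minimal sketch of that idea, using an untrained random-weight MLP as a stand-in for a learned generator (all dimensions and names here are illustrative assumptions, not from any real library):

```python
import numpy as np

# Hypothetical dimensions: a 64-dim condition vector (e.g. a persistence
# vector) conditions generation of a 16x16x16 occupancy (voxel) grid.
COND_DIM, NOISE_DIM, GRID = 64, 32, 16

rng = np.random.default_rng(0)

# Untrained random weights standing in for a trained conditional generator.
W1 = rng.standard_normal((COND_DIM + NOISE_DIM, 256)) * 0.1
W2 = rng.standard_normal((256, GRID ** 3)) * 0.1

def generate(cond, n_samples=3):
    """Sample several voxel grids for one fixed 1D condition vector.

    Concatenating fresh noise with the condition is how both a cGAN
    generator and a conditional diffusion denoiser inject conditioning.
    """
    out = []
    for _ in range(n_samples):
        z = rng.standard_normal(NOISE_DIM)      # new noise each call
        h = np.tanh(np.concatenate([cond, z]) @ W1)
        voxels = 1.0 / (1.0 + np.exp(-(h @ W2)))  # occupancy in [0, 1]
        out.append(voxels.reshape(GRID, GRID, GRID))
    return out

cond = rng.standard_normal(COND_DIM)  # one fixed 1D input vector
samples = generate(cond)
# Same condition, different noise -> similar-but-different 3D outputs.
```

With real training, `W1`/`W2` would be replaced by a deep network optimized with a GAN or diffusion loss, but the conditioning pattern stays the same.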

HammJ
  • The word "model" is overloaded here; diffusion and GAN are themselves models. What do you mean by "generating several 3D models given a 1D vector"? – ComeOnGetMe Apr 17 '23 at 17:40
  • I see, thank you! So at first, I performed a topological analysis called "persistent homology" on a 3D model. The output is a 1D vector corresponding to that 3D model, so I can transform 3D data into 1D data. But the problem is I feel hopeless about transforming that 1D data back to 3D data so that each time I run the code a similar-but-different 3D model is generated. That's the "generating several 3D models given a 1D vector". I am sorry I didn't make it clear at first. The reason I need several 3D models for one 1D vector is that different 3D models can have similar corresponding 1D data. – HammJ Apr 18 '23 at 01:49
  • Check out the text-to-3D diffusion model from Google: https://dreamfusion3d.github.io/ – ComeOnGetMe Apr 18 '23 at 06:35
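For the diffusion route, the comments suggest conditioning on the persistence vector. The standard DDPM training recipe is to noise a clean sample to a random timestep and train a denoiser to predict that noise, with the condition passed in as an extra input. A minimal sketch of the forward (noising) process on a toy 3D grid, assuming a standard linear beta schedule (the denoiser itself is only described, not implemented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard DDPM forward process: x_t = sqrt(a_bar_t)*x_0 + sqrt(1-a_bar_t)*eps
T = 100
betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule
alpha_bars = np.cumprod(1.0 - betas)        # cumulative signal retention

def noisy_sample(x0, t):
    """Noise a clean voxel grid x0 to timestep t; returns (x_t, eps)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

x0 = rng.random((8, 8, 8))                  # a toy 3D occupancy grid
xt, eps = noisy_sample(x0, t=50)
# A denoiser eps_hat(x_t, t, cond) would be trained to predict eps,
# with the 1D persistence vector supplied as `cond`; sampling then
# reverses the process from pure noise, conditioned on the same vector.
```

Because sampling starts from fresh noise each time, one condition vector naturally produces several similar-but-different 3D outputs, which is exactly the requirement here.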

0 Answers