To my knowledge so far, I have only seen diffusion models used with images -- taking 2D noise or image data as input and outputting a 2D image. But my problem setting is quite different: from a single given 1D vector, I need to generate several random 3D models. How would we approach this with a diffusion model, or with a GAN if necessary? Is there an approach more suitable than a diffusion model or a GAN?
I would like to build a 1D-to-3D diffusion model (or GAN) that takes a single 1D vector as input but can output several different 3D models, given an effectively unlimited dataset of 1D vectors paired with corresponding 3D models.
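To make the setup concrete, here is a minimal sketch of the kind of thing I have in mind (not a working solution): a conditional DDPM-style denoiser where each 3D model is assumed to be represented as a voxel grid, and the 1D vector is injected as a conditioning signal. All names, shapes, and the voxel representation are my assumptions; a real model would presumably use a 3D U-Net instead of this tiny conv stack.

```python
import torch
import torch.nn as nn

class Conditional3DDenoiser(nn.Module):
    """Predicts the noise added to a 3D voxel grid, conditioned on a 1D vector."""
    def __init__(self, cond_dim=128, base_ch=32):
        super().__init__()
        # Embed the 1D condition vector together with the diffusion timestep.
        self.cond_mlp = nn.Sequential(
            nn.Linear(cond_dim + 1, 256), nn.SiLU(), nn.Linear(256, base_ch)
        )
        # Tiny 3D conv network; a realistic model would be a 3D U-Net.
        self.net = nn.Sequential(
            nn.Conv3d(1 + base_ch, base_ch, 3, padding=1), nn.SiLU(),
            nn.Conv3d(base_ch, base_ch, 3, padding=1), nn.SiLU(),
            nn.Conv3d(base_ch, 1, 3, padding=1),
        )

    def forward(self, x_noisy, t, cond):
        # x_noisy: (B, 1, D, H, W) noisy voxels; t: (B,) timesteps; cond: (B, cond_dim)
        emb = self.cond_mlp(torch.cat([cond, t.float().unsqueeze(1)], dim=1))
        # Broadcast the conditioning embedding over the spatial dimensions.
        emb = emb[:, :, None, None, None].expand(-1, -1, *x_noisy.shape[2:])
        return self.net(torch.cat([x_noisy, emb], dim=1))

def training_step(model, voxels, cond, alphas_cumprod):
    """One DDPM training step: noise the voxels at a random timestep,
    then regress the predicted noise against the true noise with MSE."""
    t = torch.randint(0, len(alphas_cumprod), (voxels.shape[0],))
    noise = torch.randn_like(voxels)
    a = alphas_cumprod[t][:, None, None, None, None]
    x_noisy = a.sqrt() * voxels + (1 - a).sqrt() * noise
    return nn.functional.mse_loss(model(x_noisy, t, cond), noise)
```

At sampling time I would start from pure 3D noise and run the reverse diffusion with the same 1D vector as the condition, so different noise seeds should give several different 3D outputs for one input vector. Is this the right way to think about it, or is a different 3D representation (point cloud, implicit function, etc.) or a different generative model family a better fit?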