I want to implement a generative adversarial network (GAN) with an unfixed input size, i.e. a 4-D tensor of shape (batch_size, None, None, 3).
But when I use tf.nn.conv2d_transpose, there is an output_shape parameter, and this parameter must be given the true output size after the deconvolution operation.
For example, if batch_img has shape (64, 32, 32, 128) and w is a weight tensor of shape (3, 3, 64, 128), then after

deconv = tf.nn.conv2d_transpose(batch_img, w, output_shape=[64, 64, 64, 64], strides=[1, 2, 2, 1], padding='SAME')

I get deconv with shape (64, 64, 64, 64). That works as long as I pass the true size in output_shape.
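For reference, here is a minimal runnable version of the fixed-size case above (random inputs, TF 2 eager mode assumed; note the keyword is strides, not stride):

```python
import tensorflow as tf

# Input batch: (batch, height, width, in_channels) = (64, 32, 32, 128)
batch_img = tf.random.normal([64, 32, 32, 128])
# Filter layout for conv2d_transpose: (height, width, out_channels, in_channels)
w = tf.random.normal([3, 3, 64, 128])

# With stride 2 and SAME padding, height and width are doubled: 32 -> 64
deconv = tf.nn.conv2d_transpose(batch_img, w, output_shape=[64, 64, 64, 64],
                                strides=[1, 2, 2, 1], padding='SAME')
print(deconv.shape)  # (64, 64, 64, 64)
```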
But I want to use an unfixed input size (64, None, None, 128) and get deconv with shape (64, None, None, 64).
However, this raises the error below:

TypeError: Failed to convert object of type <type 'list'> to Tensor...
So, what can I do to avoid passing this parameter to conv2d_transpose, or is there another way to implement a GAN with an unfixed input size?
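A sketch of the kind of thing I am hoping is possible: building output_shape at run time from tf.shape(batch_img) instead of a Python list, since tf.shape yields the dynamic dimensions as a tensor (the deconv_layer helper and the stride factor are my own naming; TF 2 eager assumed):

```python
import tensorflow as tf

def deconv_layer(batch_img, w, out_channels, stride=2):
    # tf.shape returns the runtime shape as a tensor, so this works even when
    # the static height/width are None (e.g. a (64, None, None, 128) input)
    dyn = tf.shape(batch_img)
    output_shape = tf.stack([dyn[0], dyn[1] * stride, dyn[2] * stride, out_channels])
    return tf.nn.conv2d_transpose(batch_img, w, output_shape=output_shape,
                                  strides=[1, stride, stride, 1], padding='SAME')

w = tf.random.normal([3, 3, 64, 128])  # (h, w, out_channels, in_channels)
# The same weights then handle two different spatial sizes:
for h in (16, 32):
    x = tf.random.normal([2, h, h, 128])
    print(deconv_layer(x, w, out_channels=64).shape)  # (2, 2*h, 2*h, 64)
```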