
The code below only handles 32*32 input, but I want to feed in 128*128 images. How should I go about it? The code is from this tutorial: https://github.com/awjuliani/TF-Tutorials/blob/master/DCGAN.ipynb

def generator(z):

    # Project z and reshape it into a 4x4x256 feature map
    zP = slim.fully_connected(z, 4*4*256, normalizer_fn=slim.batch_norm,
        activation_fn=tf.nn.relu, scope='g_project', weights_initializer=initializer)
    zCon = tf.reshape(zP, [-1, 4, 4, 256])

    # 4x4 -> 8x8
    gen1 = slim.convolution2d_transpose(
        zCon, num_outputs=64, kernel_size=[5,5], stride=[2,2],
        padding="SAME", normalizer_fn=slim.batch_norm,
        activation_fn=tf.nn.relu, scope='g_conv1', weights_initializer=initializer)

    # 8x8 -> 16x16
    gen2 = slim.convolution2d_transpose(
        gen1, num_outputs=32, kernel_size=[5,5], stride=[2,2],
        padding="SAME", normalizer_fn=slim.batch_norm,
        activation_fn=tf.nn.relu, scope='g_conv2', weights_initializer=initializer)

    # 16x16 -> 32x32
    gen3 = slim.convolution2d_transpose(
        gen2, num_outputs=16, kernel_size=[5,5], stride=[2,2],
        padding="SAME", normalizer_fn=slim.batch_norm,
        activation_fn=tf.nn.relu, scope='g_conv3', weights_initializer=initializer)

    # Final layer keeps 32x32 (stride defaults to 1) and maps to 1 channel
    g_out = slim.convolution2d_transpose(
        gen3, num_outputs=1, kernel_size=[32,32], padding="SAME",
        biases_initializer=None, activation_fn=tf.nn.tanh,
        scope='g_out', weights_initializer=initializer)

    return g_out

def discriminator(bottom, reuse=False):

    # 32x32 -> 16x16
    dis1 = slim.convolution2d(bottom, 16, [4,4], stride=[2,2], padding="SAME",
        biases_initializer=None, activation_fn=lrelu,
        reuse=reuse, scope='d_conv1', weights_initializer=initializer)

    # 16x16 -> 8x8
    dis2 = slim.convolution2d(dis1, 32, [4,4], stride=[2,2], padding="SAME",
        normalizer_fn=slim.batch_norm, activation_fn=lrelu,
        reuse=reuse, scope='d_conv2', weights_initializer=initializer)

    # 8x8 -> 4x4
    dis3 = slim.convolution2d(dis2, 64, [4,4], stride=[2,2], padding="SAME",
        normalizer_fn=slim.batch_norm, activation_fn=lrelu,
        reuse=reuse, scope='d_conv3', weights_initializer=initializer)

    # Flatten (4*4*64 = 1024 features for a 32x32 input) and classify
    d_out = slim.fully_connected(slim.flatten(dis3), 1, activation_fn=tf.nn.sigmoid,
        reuse=reuse, scope='d_out', weights_initializer=initializer)

    return d_out

Below is the error I get when I feed in 128*128 images:

Trying to share variable d_out/weights, but specified shape (1024, 1) and found shape (16384, 1).

1 Answer


The generator produces 32*32 images, so the discriminator's d_out layer is built for that size; feeding images of any other dimension into the discriminator then results in the given error.
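The numbers in the error can be checked by hand: with SAME padding, each stride-2 convolution halves the spatial size, so the flattened feature count entering d_out differs between the two input sizes. A minimal sketch of that arithmetic (plain Python, independent of TensorFlow):

```python
def flat_features(input_size, num_stride2_convs=3, final_channels=64):
    # Each stride-2 SAME convolution halves the spatial dimensions
    size = input_size
    for _ in range(num_stride2_convs):
        size //= 2
    return size * size * final_channels

print(flat_features(32))   # 4*4*64 = 1024, the shape d_out was created with
print(flat_features(128))  # 16*16*64 = 16384, the shape a 128x128 image produces
```

These are exactly the two shapes (1024, 1) and (16384, 1) reported in the variable-sharing error.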

The solution is to make the generator produce 128*128 images, either by:

1. adding more layers (2 more in this case), or
2. changing the input to the generator.

For option 2, change the projection layer so the reshaped feature map starts at 16x16 instead of 4x4:

zP = slim.fully_connected(z, 16*16*256, normalizer_fn=slim.batch_norm,
    activation_fn=tf.nn.relu, scope='g_project', weights_initializer=initializer)
zCon = tf.reshape(zP, [-1, 16, 16, 256])
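As a sanity check (plain Python, not tied to TensorFlow): each stride-2 SAME transposed convolution doubles the spatial size, so starting from 16x16 the three existing layers give 16 -> 32 -> 64 -> 128, and the original 4x4 start would need five doublings (i.e. two extra layers) to reach 128:

```python
def output_size(start, num_stride2_deconvs=3):
    # Each stride-2 SAME transposed convolution doubles the spatial size
    size = start
    for _ in range(num_stride2_deconvs):
        size *= 2
    return size

print(output_size(4))      # 32: the original generator
print(output_size(16))     # 128: after changing the projection to 16*16*256
print(output_size(4, 5))   # 128: the alternative of adding 2 more layers
```

Either route yields 128x128 output, after which the discriminator's d_out layer is consistently built for 16*16*64 = 16384 flattened features.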