[Image: Loss output]

[Image: Epoch 996 output]

I have been working on a deep convolutional generative adversarial network (DCGAN) that generates pictures of cats (RGB, 64x64 pixels). It seems to learn rather quickly: the images are clearly cats by around the 300th epoch. However, even after 1000 epochs they still have a noticeable amount of blur, which keeps them from reaching full resolution. I am almost certain the issue is in my generator network structure, so I have attached it below.

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential()
model.add(layers.Dense(8*8*256, use_bias=False, input_shape=(100,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((8, 8, 256)))
assert model.output_shape == (None, 8, 8, 256)

model.add(layers.Conv2DTranspose(256, (5, 5), strides=(1, 1), padding='same', use_bias=False))
assert model.output_shape == (None, 8, 8, 256)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())

model.add(layers.Conv2DTranspose(128, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 16, 16, 128)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())

model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False))
assert model.output_shape == (None, 32, 32, 64)
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())

# activation is tanh so we don't squash the negative values we've been keeping through LeakyReLU
model.add(layers.Conv2DTranspose(3, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh'))
print("model.output_shape=", model.output_shape)
assert model.output_shape == (None, 64, 64, 3)

[![enter image description here](https://i.stack.imgur.com/eICCQ.png)](https://i.stack.imgur.com/pfMY5.png)

I suspect the problem results from the artifacts generated by my use of Conv2DTranspose layers. Is it worth switching to upsampling (UpSampling2D) followed by a regular convolutional layer, as in the sketch below? I feel like the network would do less learning that way.
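For reference, here is a rough, untested sketch of the kind of block I am considering as a replacement for one of the strided Conv2DTranspose blocks; the filter count and kernel size just mirror the 128-filter block above and are not tuned:

# Rough, untested sketch: nearest-neighbour upsampling followed by a plain
# convolution, as an alternative to one strided Conv2DTranspose block.
# Filter count and kernel size mirror the 128-filter block above.
model.add(layers.UpSampling2D(size=(2, 2), interpolation='nearest'))
model.add(layers.Conv2D(128, (5, 5), strides=(1, 1), padding='same', use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())

The idea is that a fixed upsample followed by a learned convolution avoids the uneven kernel overlap of a strided transposed convolution, while the Conv2D still learns a full set of filters at each resolution.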
