
After playing around with the PyTorch DCGAN faces tutorial, I started working with my own dataset, which consists of images of size 1x32x32 (channels, height, width).

I have applied most of the tips from this repository: https://github.com/soumith/ganhacks

But currently I am stuck.

I added this check to choose whether to train the generator (G) or the discriminator (D) on each batch:

    if i > 1:
        if D_G_z1 < 0.5:   # D scores the fakes as fake -> D is winning, let G catch up
            train_G = True
            train_D = False
        else:              # D is being fooled -> train D instead
            train_D = True
            train_G = False

Here i is the current batch number, and train_D and train_G are both set to True on batch one. D_G_z1 is the mean of D(G(z)) over the batch.

I'd expect that once D is trained and D(G(z)) reaches 0.5, D stops training and G starts training to improve the realism of the generated images, and so on. D and G do indeed train only when their conditions are met.
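To make the gating above concrete, here is a minimal sketch of it as a standalone function. The function name `choose_phase` and the `threshold` parameter are my own for illustration; the logic and the 0.5 threshold are the ones from the snippet above.

```python
def choose_phase(i, d_g_z1, threshold=0.5):
    """Decide which network trains on this batch.

    i       -- 1-based batch number; both nets train on batch one
    d_g_z1  -- mean of D(G(z)) over the current fake batch
    Returns (train_D, train_G).
    """
    if i <= 1:
        return True, True          # warm-up: train both
    train_G = d_g_z1 < threshold   # D calls the fakes fake -> let G catch up
    return (not train_G), train_G  # otherwise train D
```

For example, `choose_phase(2, 0.3)` returns `(False, True)` (G trains), while `choose_phase(2, 0.6)` returns `(True, False)` (D trains).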

However, G's loss gets stuck at about 0.7 after 5 epochs and doesn't seem to change even after 1k epochs (I haven't tried more). Changing the learning rate of G, or making G more or less complex by changing the number of channels per ConvTranspose2d layer, doesn't help either.
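One sanity check I can add (my own arithmetic, assuming the standard BCE generator loss with real-label 1, i.e. -log(D(G(z)))): a loss of about 0.693 = ln 2 corresponds exactly to D(G(z)) = 0.5, which is the equilibrium the gate above enforces.

```python
import math

# Generator BCE loss at the gate's 0.5 threshold:
# loss_G = -log(D(G(z))) = -log(0.5) = ln 2 ~ 0.693,
# which matches the observed plateau near 0.7.
g_loss_at_threshold = -math.log(0.5)
print(round(g_loss_at_threshold, 3))  # 0.693
```

So the plateau itself may just be the gate holding D(G(z)) at its threshold, rather than G failing outright.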

What's the best approach now? Any advice would be appreciated.

The code is found here: https://github.com/deKeijzer/SRON-DCGAN/blob/master/notebooks/ExoGAN_v1.ipynb

TLDR: The generator loss is stuck at 0.7 and doesn't change anymore. Nor has G learned a good representation of X.

deKeijzer