I'm learning GANs and was trying to run the pix2pix GAN model on a custom dataset. My average generator loss per epoch and average discriminator fake and real losses are as follows -
I just can't understand how my generator loss can decrease while the discriminator's fake-image loss increases. From what I understood, the fake loss was supposed to go down along with the generator loss. Can someone please help me understand the mistake I made or the training problem I'm facing?
Batch size: 16
Epochs: 100
Learning rate: 0.0008
L1 lambda: 100
Optimizers: Gen - Adam; Disc - SGD
BatchNorm used in the generator.
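For reference, my training step follows the standard pix2pix recipe with the settings above. Here is a minimal sketch of it in PyTorch (the tiny conv nets and random tensors below are placeholders for illustration only, not my actual U-Net generator, PatchGAN discriminator, or dataset):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins for the U-Net generator and PatchGAN discriminator.
gen = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
disc = nn.Sequential(nn.Conv2d(6, 1, 3, padding=1))  # sees (input, image) pairs

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
L1_LAMBDA = 100  # L1 weight from my setup
opt_g = torch.optim.Adam(gen.parameters(), lr=8e-4)  # Gen: Adam, lr 0.0008
opt_d = torch.optim.SGD(disc.parameters(), lr=8e-4)  # Disc: SGD

x = torch.randn(16, 3, 8, 8)  # batch of 16 input images (placeholder data)
y = torch.randn(16, 3, 8, 8)  # paired target images (placeholder data)

# --- Discriminator step: real pairs -> 1, fake pairs -> 0 ---
fake = gen(x)
d_real = disc(torch.cat([x, y], dim=1))
d_fake = disc(torch.cat([x, fake.detach()], dim=1))
loss_d_real = bce(d_real, torch.ones_like(d_real))
loss_d_fake = bce(d_fake, torch.zeros_like(d_fake))
loss_d = (loss_d_real + loss_d_fake) / 2
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# --- Generator step: fool the discriminator, plus weighted L1 to target ---
d_fake_for_g = disc(torch.cat([x, fake], dim=1))
loss_g = bce(d_fake_for_g, torch.ones_like(d_fake_for_g)) + L1_LAMBDA * l1(fake, y)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

The fake and real losses I plotted are the per-epoch averages of `loss_d_fake` and `loss_d_real` above, and the generator loss is the average of `loss_g`.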