I am training a cDCGAN in PyTorch to generate Covid X-ray images at a resolution of 128x128. While developing, training is limited to 2 classes over 500 epochs.
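For context, the conditioning follows the usual cDCGAN pattern of embedding the class label and concatenating it with the noise vector before the transposed-convolution stack. A minimal sketch (the layer widths and names here are illustrative, not my exact code):

```python
import torch
import torch.nn as nn

NUM_CLASSES, LATENT_DIM, EMBED_DIM = 2, 100, 50  # illustrative values

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_embed = nn.Embedding(NUM_CLASSES, EMBED_DIM)
        self.net = nn.Sequential(
            # project noise+label to a 4x4 map, then five stride-2 upsamples to 128x128
            nn.ConvTranspose2d(LATENT_DIM + EMBED_DIM, 1024, 4, 1, 0, bias=False),
            nn.BatchNorm2d(1024), nn.ReLU(True),
            nn.ConvTranspose2d(1024, 512, 4, 2, 1, bias=False),   # 8x8
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),    # 16x16
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),    # 32x32
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),     # 64x64
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 1, 4, 2, 1, bias=False),       # 128x128, grayscale
            nn.Tanh(),
        )

    def forward(self, z, labels):
        # condition by concatenating the label embedding onto the noise vector
        cond = torch.cat([z, self.label_embed(labels)], dim=1)
        return self.net(cond.unsqueeze(-1).unsqueeze(-1))
```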
Actual losses

Why does the training behave like this?
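The plotted curves are the per-batch discriminator and generator losses from the standard BCE adversarial objective. A simplified sketch of the training step (`D`, `G`, `optD`, and `optG` are placeholder names for the conditional networks and their optimizers, not my exact script):

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()  # assumes the discriminator ends in a Sigmoid

def train_step(D, G, optD, optG, real, labels, latent_dim=100):
    b, device = real.size(0), real.device
    ones, zeros = torch.ones(b, device=device), torch.zeros(b, device=device)

    # discriminator: push real towards 1, generated towards 0
    optD.zero_grad()
    fake = G(torch.randn(b, latent_dim, device=device), labels)
    d_loss = criterion(D(real, labels).view(-1), ones) + \
             criterion(D(fake.detach(), labels).view(-1), zeros)
    d_loss.backward()
    optD.step()

    # generator: try to make the discriminator output 1 on generated images
    optG.zero_grad()
    g_loss = criterion(D(fake, labels).view(-1), ones)
    g_loss.backward()
    optG.step()

    return d_loss.item(), g_loss.item()  # the values that end up on the plots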
Class distribution of the Covid X-ray dataset:
- Class 0: 6,012 samples
- Class 1: 10,912 samples
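Since the two classes are imbalanced (roughly 1:1.8), one thing I have considered is balancing batches with a `WeightedRandomSampler`. A minimal sketch, assuming a torchvision `ImageFolder`-style dataset (`train_dataset` here is hypothetical):

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

# torchvision's ImageFolder exposes per-sample class indices as .targets
labels = torch.tensor(train_dataset.targets)
class_counts = torch.bincount(labels)                  # e.g. tensor([ 6012, 10912])
sample_weights = (1.0 / class_counts.float())[labels]  # rarer class gets higher weight

# draw with replacement so each batch is roughly class-balanced in expectation
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
loader = DataLoader(train_dataset, batch_size=64, sampler=sampler)
```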
Expected losses (testing with a Rock Paper Scissors dataset)
Results with Rock Paper Scissors Dataset
This is the behaviour of the same script trained on the Rock Paper Scissors dataset. Why does the model behave so differently on the Covid X-ray dataset?
- 840 samples per class