I've written my own implementation of StyleGAN (paper here: https://arxiv.org/abs/1812.04948) in PyTorch, rather than TensorFlow, which the official implementation uses. I'm doing this partly as an exercise in implementing a scientific paper from scratch.
I have done my best to reproduce all the features described in the StyleGAN paper and in the ProgressiveGAN paper it builds on. The network trains, but I consistently get blurry images and blob-shaped artifacts:
I would very much like to know whether anyone with experience with GANs in general, or StyleGAN in particular, has seen this phenomenon and can offer any insight into possible causes.
(Some detail: I'm training on downsampled CelebA images, with 600k images of burn-in and 600k images of fade-in, but I see very similar artifacts with a tiny toy dataset and far fewer iterations.)
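For clarity, the fade-in I mean is the ProgressiveGAN-style blend between the upsampled output of the previous resolution and the new block's output, with alpha ramped linearly over the fade-in images. This is just a minimal sketch of that idea, not my actual code; the function names and the linear schedule are illustrative:

```python
import torch

def fade_alpha(images_seen: int, fade_images: int = 600_000) -> float:
    # Linear ramp from 0 to 1 over the fade-in phase (600k images here,
    # matching the schedule described above); clamped at 1 afterwards.
    return min(1.0, images_seen / fade_images)

def fade_in(alpha: float, upsampled_lowres: torch.Tensor,
            highres: torch.Tensor) -> torch.Tensor:
    # Blend the upsampled previous-resolution output with the new
    # higher-resolution block's output during the fade-in phase.
    return (1.0 - alpha) * upsampled_lowres + alpha * highres
```

If anyone suspects the blending itself (e.g. the upsampling mode used for `upsampled_lowres`, or the alpha schedule) could cause blur or blobs, that would be useful to know.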