
I've written my own implementation of StyleGAN (paper here: https://arxiv.org/abs/1812.04948), using PyTorch instead of TensorFlow, which the official implementation uses. I'm doing this partly as an exercise in implementing a scientific paper from scratch.

I have done my best to reproduce all the features mentioned in the paper and in the ProgressiveGAN paper on which it is based, and the network trains, but I consistently get blurry images and blob-shaped artifacts:

[Example 1] [Example 2] (sample outputs showing the blur and blob artifacts)

I would very much like to know if anyone with experience of GANs in general or StyleGAN in particular has seen this phenomenon and can give me any insight into possible reasons for it.

(Some detail: I'm training on downsampled CelebA images, 600k images burn-in, 600k images fade-in, but I see very similar phenomena with a tiny toy dataset and a lot fewer iterations.)
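(For reference, the fade-in I mean is ProgressiveGAN's linear blending of the newly added resolution block with an upsampled skip connection; here is a minimal sketch of the generator-side blend as I understand it, with all function and module names made up for illustration:)

```python
import torch.nn.functional as F

def generator_fade_in(x, new_block, to_rgb_new, to_rgb_old, alpha):
    # ProgressiveGAN-style fade-in: alpha ramps linearly from 0 to 1
    # over the fade-in phase (600k images in my setup).
    # `new_block` is the freshly added resolution block; `to_rgb_old`
    # and `to_rgb_new` project features to RGB at the old and new
    # resolutions respectively. Names are illustrative only.
    skip = F.interpolate(to_rgb_old(x), scale_factor=2, mode='nearest')
    out = to_rgb_new(new_block(x))
    return alpha * out + (1 - alpha) * skip
```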

1 Answer


I've been working with StyleGAN for a while, but I can't pinpoint the cause from so little information.

One possible cause is the truncation trick: pulling the latents toward the mean makes results look like an average face but with higher quality, while relaxing the truncation increases variety at the risk of artifacts like yours. Check how you implemented this trick in PyTorch.
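As a reference point, here is a minimal sketch of how the truncation trick is usually done; `mapping` stands in for your mapping network and all names here are illustrative, not taken from your code:

```python
import torch

@torch.no_grad()
def truncate_w(mapping, z, psi=0.7, n_mean=10_000):
    # Estimate the center of mass of W from many random latents
    # (Sec. 4.1 of the StyleGAN paper).
    w_mean = mapping(torch.randn(n_mean, z.shape[1])).mean(dim=0, keepdim=True)
    w = mapping(z)
    # Pull each w toward the mean: psi=1 disables truncation,
    # psi=0 collapses every sample to the mean face.
    return w_mean + psi * (w - w_mean)
```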

I recommend checking this repository (https://github.com/rosinality/style-based-gan-pytorch), which implements StyleGAN in PyTorch. Comparing against it may reveal whether your model is missing something.

Finally, I would also suggest reading the StyleGAN2 paper (https://arxiv.org/abs/1912.04958) by the same authors, where they explain how they resolved the droplet artifacts and improved on StyleGAN's quality.
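For context, StyleGAN2's key change is replacing AdaIN with weight demodulation, which is what removes the droplet artifacts. Below is a rough PyTorch sketch of a modulated convolution; shapes and names are illustrative, not the official code:

```python
import torch
import torch.nn.functional as F

def modulated_conv2d(x, weight, style, demodulate=True, eps=1e-8):
    # x: (N, C_in, H, W), weight: (C_out, C_in, k, k), style: (N, C_in)
    N, C_in, H, W = x.shape
    C_out, _, k, _ = weight.shape
    # Modulate: scale the weights' input channels by the per-sample style.
    w = weight.unsqueeze(0) * style.view(N, 1, C_in, 1, 1)
    if demodulate:
        # Demodulate: rescale so each output feature map has unit
        # expected std, replacing AdaIN's explicit normalization.
        d = torch.rsqrt((w ** 2).sum(dim=(2, 3, 4), keepdim=True) + eps)
        w = w * d
    # Grouped-convolution trick: one group per sample in the batch.
    x = x.reshape(1, N * C_in, H, W)
    w = w.reshape(N * C_out, C_in, k, k)
    out = F.conv2d(x, w, padding=k // 2, groups=N)
    return out.reshape(N, C_out, H, W)
```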

mgrau
Thank you for your response. I was already considering moving on to StyleGAN2, but since this is partly a learning exercise, I wanted to approximate the original paper first. I'm not using the truncation trick; I'm not too concerned with top quality for now, and these artifacts don't look like the sort of error I'd expect from latent outliers. I also see them on every single sample, which suggests the truncation trick won't help. I might try sampling the mode to be sure. Thanks for the link to the PyTorch implementation; I'll check it out. – Kristoffer Sjöö Aug 27 '20 at 13:25