So far I've only seen upscaling in a network done with transposed convolutions (for example, in the DCGAN paper).
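For reference, here's a minimal sketch of what I mean by transposed-convolution upscaling (the shapes and filter sizes are arbitrary, not taken from DCGAN):

```python
import tensorflow as tf

# Learned 2x upscaling: a single transposed convolution with stride 2.
x = tf.random.normal([1, 8, 8, 64])    # NHWC input feature maps
w = tf.random.normal([4, 4, 32, 64])   # filter: [h, w, out_ch, in_ch]
y = tf.nn.conv2d_transpose(
    x, w,
    output_shape=[1, 16, 16, 32],
    strides=[1, 2, 2, 1],
    padding='SAME')
print(y.shape)  # (1, 16, 16, 32)
```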
Now I'm reading a new paper by Nvidia (Progressive Growing of GANs, see https://arxiv.org/abs/1710.10196) where they use tf.tile to "upscale" (i.e., nearest-neighbor upsampling), after which they apply regular convolutions with 'SAME' padding.
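As I understand it, the tile-based upscale amounts to nearest-neighbor upsampling with no learned parameters, followed by an ordinary convolution. Here's a minimal sketch of that idea (the `upscale2d` helper and all shapes are mine for illustration, not the paper's actual code):

```python
import tensorflow as tf

def upscale2d(x, factor=2):
    """Nearest-neighbor upsampling via tf.tile (NHWC); no learned parameters.
    Each pixel is simply repeated factor x factor times."""
    _, h, w, c = x.shape
    x = tf.reshape(x, [-1, h, 1, w, 1, c])        # insert singleton axes
    x = tf.tile(x, [1, 1, factor, 1, factor, 1])  # repeat along them
    return tf.reshape(x, [-1, h * factor, w * factor, c])

x = tf.random.normal([1, 8, 8, 64])
up = upscale2d(x)                                   # (1, 16, 16, 64)
k = tf.random.normal([3, 3, 64, 32])                # filter: [h, w, in_ch, out_ch]
y = tf.nn.conv2d(up, k, strides=1, padding='SAME')  # learned filtering after upsampling
print(y.shape)                                      # (1, 16, 16, 32)
```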
Where can I read more about the Nvidia researchers' approach? And what are the trade-offs between these two approaches?