I am trying to generate synthetic handwritten data. I came across a GAN being used to generate images of single characters:
Generator: upsamples a random tensor to an image.
Discriminator: trained on real data to classify images as real or generated.
But with this setup there is no way to control which character the generator outputs (a rough sketch of what I mean is below).
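This is roughly the architecture I came across; a minimal PyTorch sketch, assuming 28x28 grayscale character images and a 100-dimensional noise vector (the layer sizes are only illustrative, not the exact model I saw):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        # Upsamples a random tensor (noise) to a 28x28 image.
        self.net = nn.Sequential(
            nn.Linear(z_dim, 128 * 7 * 7),
            nn.ReLU(),
            nn.Unflatten(1, (128, 7, 7)),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),    # 14x14 -> 28x28
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # Classifies an image as real (from the dataset) or generated.
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1),    # 28x28 -> 14x14
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),  # 14x14 -> 7x7
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 7 * 7, 1),  # single real/fake logit
        )

    def forward(self, x):
        return self.net(x)

# The generator only ever sees noise, so nothing tells it WHICH character to draw:
z = torch.randn(16, 100)
fake_images = Generator()(z)           # (16, 1, 28, 28), characters come out "at random"
logits = Discriminator()(fake_images)  # (16, 1) real/fake scores
```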
I have a handwriting dataset containing images of whole words (IAM).
If I move to the word level, do I need a separate GAN for each word?
I also need each output image to be annotated with its word, so it can be used as training data for word recognition.
Is there a GAN architecture that can output synthetic handwritten images of words that are not in the training dataset?