
I am a computer science student in my final year, working on my BSc thesis project. My project focuses on generating new motifs for the Jamdani saree, a traditional garment from Bangladesh, by combining two input motifs: one from the existing Jamdani tradition and the other a Parsi motif (a precursor of Jamdani).

I have been exploring various GAN (Generative Adversarial Network) architectures, including pix2pix, CycleGAN, and StyleGAN. However, I've encountered some challenges specific to my project that these pre-existing architectures don't address:

pix2pix: This model requires paired training images, each input matched with a target. However, I have two motifs as input, and I want to generate a new motif that combines elements from both, so there is no single ground-truth target.

CycleGAN: While it doesn't require paired images, CycleGAN is designed to translate images from one domain to another. In my case, I want to blend motifs from two domains to create something new, rather than performing a one-to-one translation.

StyleGAN: Although exceptional for generating realistic images and controlling attributes, StyleGAN may not be ideal for my project, as it primarily targets photorealism and offers no obvious mechanism for conditioning on two input motifs at once.

I am seeking guidance on how to design and implement a custom GAN architecture tailored to this specific task. Any advice, resources, or guidelines regarding the architectural components, training data preparation, and loss functions would be greatly appreciated. Additionally, if there are any prior works or research papers that address similar challenges, please point me in the right direction.
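To make the task concrete, here is a minimal sketch of the kind of generator I have in mind: two convolutional encoders (one per motif source) whose latent codes are concatenated and decoded into a new motif image. This is written in PyTorch; all layer sizes, the 64x64 input resolution, and the class names (`MotifEncoder`, `FusionGenerator`) are illustrative assumptions of mine, not a tested design, and the discriminator and losses are omitted entirely.

```python
# Hypothetical sketch: a dual-input generator that fuses two motif images.
# Layer sizes and the 64x64 resolution are illustrative assumptions.
import torch
import torch.nn as nn

class MotifEncoder(nn.Module):
    """Encodes one 3x64x64 motif image into a latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class FusionGenerator(nn.Module):
    """Concatenates latents from both motifs and decodes a new motif."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.enc_jamdani = MotifEncoder(latent_dim)
        self.enc_parsi = MotifEncoder(latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(2 * latent_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Tanh(),  # output in [-1, 1], matching typical GAN preprocessing
        )

    def forward(self, motif_a, motif_b):
        z = torch.cat([self.enc_jamdani(motif_a), self.enc_parsi(motif_b)], dim=1)
        return self.decoder(z)

gen = FusionGenerator()
a = torch.randn(1, 3, 64, 64)  # stand-in for a Jamdani motif
b = torch.randn(1, 3, 64, 64)  # stand-in for a Parsi motif
out = gen(a, b)
print(out.shape)  # torch.Size([1, 3, 64, 64])
```

My open questions are mainly about what to train this against: since there is no ground-truth "blended" motif, I assume the loss would have to combine an adversarial term (is the output a plausible motif?) with some similarity terms back to each input, but I'm unsure how to balance them.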

