I am implementing an autoencoder to reconstruct color images. The loss function I want to use requires a reduced color palette (at most ~100 distinct colors), but I am struggling to find a suitable differentiable quantization algorithm.
Another doubt: is it better to apply the quantization directly in the loss function, or can I implement it in a custom non-trainable layer? In the latter case, does the algorithm still need to be differentiable?
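To make concrete the kind of differentiable quantization I mean, here is a minimal NumPy sketch of one direction (the 3-color palette is made up; the real one would have ~100 colors): instead of snapping each pixel to its nearest palette color, which has zero gradient almost everywhere, take a softmax-weighted combination of the palette colors.

```python
import numpy as np

# Hypothetical palette of K colors (rows are RGB triples in [0, 1]).
palette = np.array([[0.0, 0.0, 0.0],
                    [0.5, 0.5, 0.5],
                    [1.0, 1.0, 1.0]])

def soft_quantize(pixels, palette, temperature=0.1):
    """Differentiable relaxation of nearest-color quantization.

    pixels: (N, 3) array in [0, 1]; returns an (N, 3) array.
    As temperature -> 0 this approaches hard nearest-neighbor
    quantization, but every operation here (distances, softmax,
    weighted sum) is differentiable.
    """
    # Squared distance from each pixel to each palette color: shape (N, K).
    d2 = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=-1)
    # Soft assignment weights: softmax over negative distances.
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    # Each output pixel is a convex combination of palette colors.
    return w @ palette

pixels = np.array([[0.1, 0.1, 0.1],    # close to black
                   [0.9, 0.95, 1.0]])  # close to white
out = soft_quantize(pixels, palette, temperature=0.01)
```

At low temperature the output is effectively snapped to the nearest palette color, so in principle this could live either in the loss or in a non-trainable layer, but I am not sure it is the right direction.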
My first idea for approaching this problem was to quantize the images before feeding them to the network, but I don't know how to "force" the network to produce only the quantized colors as output.
Any suggestion is greatly appreciated; I do not need code, just some ideas or new perspectives. Being pretty new to TensorFlow, I am probably missing something.