
Suppose we have formed the codebook using the RGB training images. This codebook is now present at the encoder and the decoder.

Now we have one RGB test image (not contained in the training set) that we want to compress, transmit, and reconstruct. Since the test image's intensities differ from those of the training images, and some may not match any training-image intensity at all, won't parts of the reconstructed image come out darker or brighter than the original with existing vector quantization algorithms? Is there any way to deal with these intensities in existing algorithms such as K-means or LBG? Should we instead make an appropriate choice of training images to begin with, or include the test image in the training set as well? What is the standard way?

Sara

1 Answer


Vector Quantization is a lossy compression scheme. You build the codebook by finding the best-matching clusters within the training set, and each input vector is then replaced by its nearest codeword, so the result is an approximation. The larger the training set, the better the match will be, but there will always be some loss.
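As a concrete illustration, here is a minimal sketch of that codebook/lookup pipeline using scikit-learn's KMeans as a stand-in for LBG. The block size, the codebook size of 256, and the `training_images` / `test_image` arrays are assumptions made for the example, not anything prescribed by the question.

```python
import numpy as np
from sklearn.cluster import KMeans

BLOCK = 4  # assumed 4x4 blocks; each RGB block becomes a 4*4*3 = 48-dim vector

def to_vectors(img):
    """Split an HxWx3 uint8 image into flattened, non-overlapping blocks."""
    h, w, _ = img.shape
    h, w = h - h % BLOCK, w - w % BLOCK              # crop to a multiple of BLOCK
    blocks = img[:h, :w].reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK, 3)
    return blocks.swapaxes(1, 2).reshape(-1, BLOCK * BLOCK * 3), (h, w)

def from_vectors(vecs, shape):
    """Reassemble flattened block vectors into an image (inverse of to_vectors)."""
    h, w = shape
    blocks = vecs.reshape(h // BLOCK, w // BLOCK, BLOCK, BLOCK, 3).swapaxes(1, 2)
    return blocks.reshape(h, w, 3)

# Training side: cluster block vectors from all training images into a codebook.
train_vectors = np.vstack([to_vectors(img)[0] for img in training_images])
codebook = KMeans(n_clusters=256, n_init=4, random_state=0).fit(train_vectors)

# Encoder: the test image is reduced to one codeword index per block.
test_vectors, shape = to_vectors(test_image)
indices = codebook.predict(test_vectors)             # this is what gets transmitted

# Decoder: look up each index in the shared codebook and rebuild the image.
reconstructed = from_vectors(codebook.cluster_centers_[indices], shape)
reconstructed = np.clip(np.rint(reconstructed), 0, 255).astype(np.uint8)
```

Every test block is replaced by its nearest codeword, so blocks whose intensities fall between (or outside) the trained centroids are pulled toward the closest one; that is exactly where the slightly brighter or darker patches in the reconstruction come from.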

Your training set needs to cover the full range of intensities (and complexities) of the images you expect to see, not only the intensity of the image you intend to compress. Whether or not the training set contains the test image won't change the fact that loss will occur; any gain from including it would be insignificant unless the training set is very small.
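One way to see this in practice is to measure the reconstruction error of the same test image under codebooks trained on a narrow set versus a diverse set. Reusing the sketch above, a quick PSNR check might look like:

```python
def psnr(original, approx, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    mse = np.mean((original.astype(np.float64) - approx.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Compare the (cropped) original against its VQ reconstruction. A codebook
# trained on a diverse range of intensities should score noticeably higher
# than one trained only on, say, dark images -- even if neither training
# set contains the test image itself.
print(psnr(test_image[:shape[0], :shape[1]], reconstructed))
```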

Sterls