While encoding a raw image to a JPEG image, each 8x8 data unit is level shifted, transformed with a 2-D DCT, quantized, and Huffman encoded.
I first performed the row DCT, then the column DCT, and rounded the result to the nearest integer. I then sent this block to the quantization module. For quantization I used the following Q tables, which are the ones recommended by the IJG for a quality factor of 99.
Luma table:
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 2 2 1
1 1 1 1 1 2 2 2
1 1 1 1 2 2 2 2
1 1 2 2 2 2 2 2
1 2 2 2 2 2 2 2
Chroma table:
1 1 1 1 2 2 2 2
1 1 1 1 2 2 2 2
1 1 1 2 2 2 2 2
1 1 2 2 2 2 2 2
2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2
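For concreteness, the luma table above can be applied elementwise; here is a minimal sketch (NumPy, function name is mine, using plain nearest-integer rounding as a placeholder — note that `np.round` rounds halves to even, which is neither of the two rules compared below):

```python
import numpy as np

# IJG luminance table at quality factor 99, as listed above
LUMA_Q = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 2, 2, 1],
    [1, 1, 1, 1, 1, 2, 2, 2],
    [1, 1, 1, 1, 2, 2, 2, 2],
    [1, 1, 2, 2, 2, 2, 2, 2],
    [1, 2, 2, 2, 2, 2, 2, 2],
])

def quantize_block(dct_block, qtable):
    # Elementwise division by the quantization table; the rounding rule
    # is the subject of the question, so nearest-integer rounding is
    # used here only as a placeholder.
    return np.round(dct_block / qtable).astype(int)
```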
During quantization, when dividing by 2, I rounded the result away from zero. Example: 11/2 = 6. Hence, after de-quantization in the decoder, every odd coefficient is reconstructed with an error of +1.
In another setup I changed the rounding technique and rounded the result towards zero. Example: 11/2 = 5. Hence, after de-quantization in the decoder, every odd coefficient is reconstructed with an error of -1.
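The two rounding rules can be sketched as follows (the function name is mine, not from any library); for an odd coefficient divided by 2, the away-from-zero rule lands one unit above the original magnitude after de-quantization, and the toward-zero rule one unit below:

```python
import math

def quantize(c, q, mode):
    """Quantize coefficient c by step q.
    mode='away'   : round half away from zero (setup 1)
    mode='toward' : truncate toward zero      (setup 2)
    """
    if mode == 'away':
        return int(math.copysign(math.floor(abs(c) / q + 0.5), c))
    return int(c / q)  # int() truncates toward zero in Python

q = 2
for c in (11, -11, 7, 1):
    for mode in ('away', 'toward'):
        r = quantize(c, q, mode)
        # print coefficient, rule, quantized value, de-quantization error
        print(c, mode, r, r * q - c)
```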
In the second case I get a much smaller file size (smaller by almost 100 kB for a 768x512 image) and a higher PSNR. I can explain the smaller file size: every AC coefficient of magnitude 1, when quantized by 2, now becomes 0 instead of 1, so the run-length encoding produces a shorter stream. But I cannot explain why the decoded image quality increases. It increases by 2-3 dB, and it happens for every image I tested.
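For reference, this is the standard PSNR definition I assume is being measured (assuming 8-bit samples with peak value 255):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    # Peak signal-to-noise ratio in dB between two equally shaped images.
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```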
My argument is that since the DCT is basically a multiplication by the DCT matrix (A * DCTmatrix), errors of equal magnitude on either side should yield equal loss. But this is not the case here.
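The equal-loss intuition can be checked numerically: an orthonormal 2-D DCT preserves energy (Parseval's theorem), so a ±1 error on every coefficient yields the same pixel-domain squared error regardless of the signs. A sketch, assuming the orthonormal DCT-II normalization (the matrix A is built by hand here and is my assumption about the exact variant used):

```python
import numpy as np

# 8-point orthonormal DCT-II matrix
N = 8
A = np.array([[np.sqrt((1 if i == 0 else 2) / N) *
               np.cos((2 * j + 1) * i * np.pi / (2 * N))
               for j in range(N)] for i in range(N)])

rng = np.random.default_rng(0)
block = rng.integers(-128, 128, size=(N, N)).astype(float)
coef = A @ block @ A.T                       # separable 2-D DCT

err = rng.choice([-1.0, 1.0], size=(N, N))   # +/-1 error on every coefficient
recon = A.T @ (coef + err) @ A               # inverse 2-D DCT

# Parseval: pixel-domain squared error equals coefficient-domain squared error
print(np.allclose(((recon - block) ** 2).sum(), (err ** 2).sum()))
```

This only shows that errors of equal magnitude cost the same in the pixel domain; it does not by itself explain the 2-3 dB gap, which suggests the two rounding rules do not in fact produce errors of equal magnitude on the same coefficients.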