
I am writing a program that reads pixel values from text files and writes them to a JPEG file using libjpeg. When I set the quality to 100 (with jpeg_set_quality), there is no visible quality degradation in grayscale. However, when I move to RGB, even at quality 100, there is still visible compression loss.

When I give this input to convert to a grayscale JPEG image it works nicely and gives me a clean JPEG image:

  0   0   0   0   0
  0   0   0   0   0
  0   0   0   0   0
  0 255   0   0   0
255   0   0   0   0

The (horizontally flipped) output is:

[grayscale output image]

Now when I assume that array was the Red color, and use the following two arrays for the Green and Blue colors respectively:

0   0   0   0   0
0   0   0   0   0
0   0 255   0   0
0   0   0   0   0
0   0   0   0   0

0   0   0   0 255
0   0   0 255   0
0   0   0   0   0
0   0   0   0   0
0   0   0   0   0

This is the color output I get:

[RGB output image]

While only five input pixels have a nonzero value, the surrounding pixels have also picked up color after the conversion. For both the grayscale and the RGB image the quality was set to 100.

What is causing this, and how can I fix it so that color only appears in the pixels that actually have an input value?

makhlaghi

2 Answers


You are getting errors from the RGB->YCbCr conversion. That is impossible to avoid in general because there is not an exact 1:1 mapping between the two color spaces: the forward and inverse transforms round to integers, and on top of that the encoder's default settings subsample the chroma channels, averaging color over blocks of neighboring pixels. A grayscale image has no chroma channels, so neither step applies to it.
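As a concrete illustration, here is a minimal sketch (assuming the 5x5 buffers from the question; the output filename is arbitrary) that writes the RGB data at quality 100 with chroma subsampling disabled via libjpeg's `comp_info` sampling factors. This reduces the color bleeding into neighboring pixels, though the rounding loss from the color transform itself remains:

```c
#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>

int main(void)
{
    /* 5x5 RGB test image: all black except the pixels from the
       question (red, green and blue planes as shown above). */
    JSAMPLE image[5 * 5 * 3] = {0};
    image[(3 * 5 + 1) * 3 + 0] = 255;  /* red   at row 3, col 1 */
    image[(4 * 5 + 0) * 3 + 0] = 255;  /* red   at row 4, col 0 */
    image[(2 * 5 + 2) * 3 + 1] = 255;  /* green at row 2, col 2 */
    image[(0 * 5 + 4) * 3 + 2] = 255;  /* blue  at row 0, col 4 */
    image[(1 * 5 + 3) * 3 + 2] = 255;  /* blue  at row 1, col 3 */

    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);

    FILE *out = fopen("test444.jpg", "wb");  /* filename is arbitrary */
    if (!out) return EXIT_FAILURE;
    jpeg_stdio_dest(&cinfo, out);

    cinfo.image_width = 5;
    cinfo.image_height = 5;
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, 100, TRUE);

    /* jpeg_set_defaults() selects 4:2:0 sampling, i.e. luma is
       sampled 2x2 relative to Cb/Cr, so each chroma sample covers a
       2x2 pixel block.  Setting the luma factors to 1 gives 4:4:4:
       every pixel keeps its own chroma sample.  Rounding in the
       RGB->YCbCr transform itself still remains. */
    cinfo.comp_info[0].h_samp_factor = 1;
    cinfo.comp_info[0].v_samp_factor = 1;

    jpeg_start_compress(&cinfo, TRUE);
    while (cinfo.next_scanline < cinfo.image_height) {
        JSAMPROW row = &image[cinfo.next_scanline * 5 * 3];
        jpeg_write_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
    fclose(out);
    return EXIT_SUCCESS;
}
```

With 4:4:4 sampling the remaining deviations should only be the small rounding errors from the color-space round trip, not the block-wide smearing you saw.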

user3344003

The fix is easy: just don't use JPEG. PNG is a better choice for your use case.

What you are seeing is a result of how JPEG compression works. There is such a thing as "lossless JPEG", but it's really a completely different file format that isn't well supported.
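For comparison, here is a minimal libpng sketch for the same 5x5 buffer (pixel layout and filename are again just assumptions for illustration). Since PNG's compression is lossless, the decoded pixels come back bit for bit identical:

```c
#include <stdio.h>
#include <stdlib.h>
#include <png.h>

int main(void)
{
    /* Same 5x5 RGB buffer as in the question. */
    unsigned char image[5 * 5 * 3] = {0};
    image[(3 * 5 + 1) * 3 + 0] = 255;  /* red pixels */
    image[(4 * 5 + 0) * 3 + 0] = 255;
    image[(2 * 5 + 2) * 3 + 1] = 255;  /* green pixel */
    image[(0 * 5 + 4) * 3 + 2] = 255;  /* blue pixels */
    image[(1 * 5 + 3) * 3 + 2] = 255;

    FILE *fp = fopen("test.png", "wb");  /* filename is arbitrary */
    if (!fp) return EXIT_FAILURE;

    png_structp png = png_create_write_struct(PNG_LIBPNG_VER_STRING,
                                              NULL, NULL, NULL);
    png_infop info = png_create_info_struct(png);
    if (setjmp(png_jmpbuf(png))) {       /* libpng error handling */
        png_destroy_write_struct(&png, &info);
        fclose(fp);
        return EXIT_FAILURE;
    }

    png_init_io(png, fp);
    png_set_IHDR(png, info, 5, 5, 8, PNG_COLOR_TYPE_RGB,
                 PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT,
                 PNG_FILTER_TYPE_DEFAULT);
    png_write_info(png, info);

    /* Write one row of 5 RGB pixels at a time. */
    for (int y = 0; y < 5; y++)
        png_write_row(png, &image[y * 5 * 3]);

    png_write_end(png, NULL);
    png_destroy_write_struct(&png, &info);
    fclose(fp);
    return EXIT_SUCCESS;
}
```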

Nir
  • Besides RGB, I also need CMYK, so I can't use PNG. I don't understand why there is no loss in grayscale but there is in color!?! – makhlaghi Feb 13 '15 at 09:04