
The JPEG decoding process consists of the following stages:

  • VLD - variable-length decoding
  • ZZ - zigzag scan
  • DQ - dequantization
  • IDCT - inverse discrete cosine transform
  • Color conversion (YUV to RGB) and reordering

My question is: for JPEG images with different characteristics, which of the above decoding processes will take more time?

For example:

When decoding this type of noisy image, which of the above five processes will take relatively more time?

Another example:

For the same image encoded at two different quality settings, which of the above five processes will take more time when decoding the higher-quality version?

fluency03
  • For a fully SIMD-optimized JPEG encoder or decoder, the entropy decoding/coding is the most time-consuming part. Table lookups and variable-length code access can't be vectorized (optimized with SIMD) and end up taking the most time. – BitBank Jun 22 '15 at 16:52
  • One more thought - progressive JPEG images take longer to encode and decode because the MCUs for the entire image must be kept in memory before the final output can be generated. This causes many more cache misses than baseline-encoded images. – BitBank Jun 22 '15 at 16:54
  • Currently I am only considering baseline JPEG. I would just like to know which of the processes noise in the image will impact the most. – fluency03 Jun 22 '15 at 17:32
  • And, for JPEGs of different quality (100%, 90%, 50%, etc.), which processes are influenced the most? – fluency03 Jun 22 '15 at 17:33
  • Noise will create more high-frequency AC coefficients, which will make the file bigger and cause the variable-length decoding stage to take longer. The other stages should be unaffected by the content of the image. – BitBank Jun 22 '15 at 17:33
  • So this noise will have no impact on the IDCT? What about the impact of images with different quality on the decoding process? – fluency03 Jun 22 '15 at 17:38
  • Different quality settings affect the number of AC coefficients per MCU. Again, this only affects the entropy decoding stage. If the image dimensions are unchanged, then the other stages of the decode should run at a constant speed. – BitBank Jun 22 '15 at 17:39

1 Answer


JPEG decoding time tends to scale linearly with image size. The major factor that affects the decoding time is whether the image uses sequential or progressive scans. In sequential, each component is processed once. In progressive, each component is processed at least 2 and possibly as many as 500 times (but that would be absurd).

For your specific questions:

VLD - variable-length decoding

It depends upon whether you do this once (sequential) or multiple times (progressive).
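As a rough illustration of why this stage is inherently serial (and why noise, by producing more nonzero AC coefficients, makes it slower, as noted in the comments), here is a minimal C sketch of the bit-reading and EXTEND steps that run once per decoded coefficient. The BitReader type and function names are illustrative, not a real decoder's API, and a real JPEG decoder would also have to un-stuff 0xFF 0x00 byte pairs and watch for markers:

    #include <stdint.h>
    #include <stdio.h>

    /* Minimal big-endian bit reader over an in-memory buffer. */
    typedef struct {
        const uint8_t *data;
        size_t pos;       /* next byte to consume */
        uint32_t bitbuf;  /* buffered bits        */
        int bitcnt;       /* number of valid bits */
    } BitReader;

    static int get_bits(BitReader *br, int n)
    {
        while (br->bitcnt < n) {              /* refill one byte at a time */
            br->bitbuf = (br->bitbuf << 8) | br->data[br->pos++];
            br->bitcnt += 8;
        }
        br->bitcnt -= n;
        return (int)((br->bitbuf >> br->bitcnt) & ((1u << n) - 1));
    }

    /* JPEG's EXTEND step: map an s-bit magnitude code to a signed value.
     * This runs once per nonzero coefficient, so noisy images (with more
     * nonzero AC coefficients) spend more time here. */
    static int extend(int v, int s)
    {
        return (v < (1 << (s - 1))) ? v - (1 << s) + 1 : v;
    }

    int main(void)
    {
        const uint8_t stream[] = { 0xB5, 0x40 }; /* arbitrary demo bits */
        BitReader br = { stream, 0, 0, 0 };
        int s = 4;                  /* pretend Huffman said "4-bit value" */
        printf("coefficient = %d\n", extend(get_bits(&br, s), s));
        return 0;
    }

Each step depends on the bits consumed by the previous one, which is why this stage resists SIMD optimization.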

ZZ - zigzag scan

Easy to do. Array indices.
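To make "array indices" concrete, here is a sketch of the un-zigzag step using the standard JPEG zigzag-to-raster table (the table values come from the JPEG spec; the function name is just illustrative):

    #include <stdio.h>

    /* Standard JPEG zigzag order: zigzag[i] is the raster (row-major)
     * index of the i-th coefficient in the entropy-coded stream. */
    static const unsigned char zigzag[64] = {
         0,  1,  8, 16,  9,  2,  3, 10,
        17, 24, 32, 25, 18, 11,  4,  5,
        12, 19, 26, 33, 40, 48, 41, 34,
        27, 20, 13,  6,  7, 14, 21, 28,
        35, 42, 49, 56, 57, 50, 43, 36,
        29, 22, 15, 23, 30, 37, 44, 51,
        58, 59, 52, 45, 38, 31, 39, 46,
        53, 60, 61, 54, 47, 55, 62, 63
    };

    /* Un-zigzag one block: a single table-indexed copy per coefficient. */
    static void unzigzag(const short in[64], short out[64])
    {
        for (int i = 0; i < 64; i++)
            out[zigzag[i]] = in[i];
    }

    int main(void)
    {
        short zz[64], raster[64];
        for (int i = 0; i < 64; i++) zz[i] = (short)i; /* mark positions */
        unzigzag(zz, raster);
        printf("raster[8] = %d\n", raster[8]); /* prints 2: the third
                                                  zigzag coefficient lands
                                                  at row 1, column 0 */
        return 0;
    }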

DQ - dequantization

Depends upon how many times you do it. Once for sequential. It could be done multiple times for progressive (but does not need to be unless you want continuous updates on the screen).
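A sketch of this stage (assuming the quantization table has already been reordered to raster order to match the block) shows why its cost per block is fixed:

    #include <stdio.h>

    /* Dequantization: one multiply per coefficient, so the cost depends
     * only on the number of blocks (image size), not on image content. */
    static void dequantize(short block[64], const unsigned short qtable[64])
    {
        for (int i = 0; i < 64; i++)
            block[i] = (short)(block[i] * qtable[i]);
    }

    int main(void)
    {
        short block[64] = { 13, -2 };                /* DC = 13, AC1 = -2 */
        unsigned short qtable[64];
        for (int i = 0; i < 64; i++) qtable[i] = 16; /* flat demo table   */
        dequantize(block, qtable);
        printf("DC = %d, AC1 = %d\n", block[0], block[1]); /* 208, -32 */
        return 0;
    }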

IDCT - inverse discrete cosine transform

This depends a lot on the algorithm used, whether it is done using scaled integers or floating point, and whether it is done multiple times (as may (or may not) be done with a progressive JPEG).
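For reference, here is the slow textbook form of the 8x8 IDCT, straight from the DCT-III definition. Production decoders replace this with fast scaled-integer variants (e.g. AAN- or Loeffler-style algorithms), which is why the choice of algorithm and arithmetic dominates this stage's cost:

    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Reference floating-point 8x8 IDCT (O(n^4) per block; shown for
     * clarity only -- real decoders use fast separable integer forms). */
    static void idct_8x8_ref(const short in[64], double out[64])
    {
        for (int y = 0; y < 8; y++)
        for (int x = 0; x < 8; x++) {
            double sum = 0.0;
            for (int v = 0; v < 8; v++)
            for (int u = 0; u < 8; u++) {
                double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
                double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
                sum += cu * cv * in[v * 8 + u]
                     * cos((2 * x + 1) * u * M_PI / 16.0)
                     * cos((2 * y + 1) * v * M_PI / 16.0);
            }
            out[y * 8 + x] = sum / 4.0;
        }
    }

    int main(void)
    {
        short coef[64] = { 64 };  /* DC-only block */
        double pix[64];
        idct_8x8_ref(coef, pix);
        printf("pixel[0] = %.1f\n", pix[0]); /* DC/8 = 8.0 everywhere */
        return 0;
    }

Note that the work done here is the same for every block, regardless of how many coefficients are nonzero (though some decoders do take shortcuts for sparse blocks).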

Color conversion (YUV to RGB) and reordering

You only have to do this once. However, if there is chroma subsampling, it gets more complicated.
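Here is a sketch of the per-pixel conversion, using the floating-point form of the standard JFIF YCbCr matrix (what the question loosely calls YUV; production decoders typically use fixed-point tables instead). With 4:2:0 subsampling, each Cb/Cr sample is shared by a 2x2 group of Y samples, which is the complication mentioned above:

    #include <stdio.h>

    static unsigned char clamp(int v)
    {
        return (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
    }

    /* JFIF YCbCr -> RGB for one pixel (floating-point form of the
     * standard conversion matrix). */
    static void ycbcr_to_rgb(int y, int cb, int cr,
                             unsigned char *r, unsigned char *g,
                             unsigned char *b)
    {
        *r = clamp((int)(y + 1.402    * (cr - 128) + 0.5));
        *g = clamp((int)(y - 0.344136 * (cb - 128)
                           - 0.714136 * (cr - 128) + 0.5));
        *b = clamp((int)(y + 1.772    * (cb - 128) + 0.5));
    }

    int main(void)
    {
        unsigned char r, g, b;
        ycbcr_to_rgb(128, 128, 128, &r, &g, &b); /* neutral gray */
        printf("rgb = %u %u %u\n", r, g, b);     /* 128 128 128 */
        return 0;
    }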

In other words, the decoding time is essentially the same no matter what the image depicts. However, the decoding time depends upon how that image is encoded.

I qualify that by saying that a smaller file tends to decode faster than a larger one, simply because of the time needed to read it off the disk. The more random the data is, the larger the file tends to be. It often takes more time to read and display a large BMP file than a JPEG of the same image because of the difference in file size.

user3344003