
I have some text files (plain ASCII) which I wish to compress with Huffman coding. Since the compressed files are going to be used on limited-resource hardware, the decompression operation must be kept as simple as possible.

So I am thinking of creating a Huffman table from my text files, compressing the files with it, and copying the compressed files and my unzip program (which uses a preset Huffman table) onto my LR hardware.
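To make the device side concrete, here is a minimal sketch of what such a decoder could look like, assuming the preset table is stored in canonical form (the `count` and `symbol` arrays and the `getbit` callback are hypothetical names for data that would be baked into the firmware at build time):

    #include <stdint.h>

    #define MAX_BITS 15   /* assumed longest code length in the preset table */

    /* Hypothetical preset table in canonical form: count[len] is the number
     * of codes of each length 1..MAX_BITS, and symbol[] lists the byte
     * values sorted by (code length, value). Both would be generated
     * offline from a representative text corpus. */
    extern const uint16_t count[MAX_BITS + 1];
    extern const uint8_t  symbol[];

    /* Decode one symbol, pulling bits from a caller-supplied function.
     * This is the standard canonical-Huffman decode loop: track the first
     * code and first symbol index of each length, and stop as soon as the
     * accumulated code falls inside the current length's range. */
    static int decode(int (*getbit)(void))
    {
        int code = 0;    /* bits read so far, interpreted as a code */
        int first = 0;   /* first canonical code of the current length */
        int index = 0;   /* index in symbol[] of that first code */
        for (int len = 1; len <= MAX_BITS; len++) {
            code |= getbit();
            int n = count[len];
            if (code - first < n)               /* code has this length */
                return symbol[index + (code - first)];
            index += n;
            first = (first + n) << 1;
            code <<= 1;
        }
        return -1;   /* bitstream does not match the table */
    }

No tree needs to be stored or walked: a few dozen bytes of tables plus this loop are the whole decoder, which is about as light as Huffman decompression gets.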

I think the preset Huffman table is good enough to handle all my text files, because they all have similar content.

Since a preset Huffman table is not tuned to each file, the compressed files come out somewhat larger than they would with a dynamic Huffman coding method, which adds some I/O latency. On the other hand, a preset table avoids a lot of processing and disk access.
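One way to quantify that size penalty up front: the compressed size of a file under any prefix code is just the frequency-weighted sum of its code lengths, so a small helper like this (a hypothetical sketch, using a 256-entry code-length array) lets you compare the preset table against a per-file table on your actual corpus:

    #include <stdint.h>

    /* Expected compressed size, in bits, of data with byte frequencies
     * freq[] when coded with the code lengths len[]. Compare the result
     * for the preset lengths against lengths built for the file itself
     * to see exactly what the fixed table costs. */
    static uint64_t coded_bits(const uint32_t freq[256], const uint8_t len[256])
    {
        uint64_t bits = 0;
        for (int i = 0; i < 256; i++)
            bits += (uint64_t)freq[i] * len[i];
        return bits;
    }

If the preset table comes out only a percent or two worse on representative files, the simplicity argument is easy to make; if it is much worse, the extra I/O may dominate.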

Overall, is this a good idea? Am I helping my hardware decompress faster? Is it a common method in LRP (limited-resource programming)?

Iman Nia

1 Answer


Periodically generating and using a new Huffman code for a large-enough block of data has relatively little overhead, both in terms of computation time and in terms of bits in the stream. I see little point in trying to come up with a universal Huffman code for your data.
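As a sketch of that per-block scheme (one possible reading, not code from the answer): the encoder needs one linear histogram pass per block plus code construction, and the in-stream cost can be as small as a header of 256 one-byte code lengths, about 0.4% of an assumed 64 KiB block:

    #include <stddef.h>
    #include <stdint.h>

    #define BLOCK_SIZE 65536   /* assumed block size; a 256-byte length
                                  header is then 256/65536 = ~0.4% overhead */

    /* Per-block frequency count: the only extra pass the encoder needs
     * before building a canonical code for this block (constructing the
     * code lengths from freq[] is not shown here). */
    static void block_histogram(const uint8_t *block, size_t n,
                                uint32_t freq[256])
    {
        for (int i = 0; i < 256; i++)
            freq[i] = 0;
        for (size_t i = 0; i < n; i++)
            freq[block[i]]++;
    }

The decoder rebuilds the canonical table from the 256 code lengths at the start of each block, exactly as it would for a preset table, so the device-side decode loop stays just as simple.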

Mark Adler
  • Mark, you are my hero in compression, and I know you did a lot of work compressing spatial databases for Siemens in VDO hardware based on the VxWorks OS (or at least they used your zlib to compress their navigation data). Do you mean that using a universal Huffman table in such an application really has no point? – Iman Nia Feb 26 '17 at 19:43