I have some text files (plain ASCII) that I wish to compress with Huffman coding. Since the compressed files are going to be used on limited-resource hardware, the decompression operation must be kept as simple as possible.
So I am thinking of building a Huffman table from my text files offline, compressing the files with it, and then copying the compressed files and my decompression program (which uses a fixed, built-in Huffman table) onto the LR hardware.
I think this pre-computed Huffman table will be good enough for all my text files, because they are all similar in content.
Using a fixed Huffman table that is not perfectly tuned to each file means the compressed files come out somewhat larger than a dynamic Huffman method (which builds a table per file) would produce. On the other hand, a fixed table avoids a lot of processing and disk I/O on the target device, since no table needs to be read or rebuilt at decompression time.
Overall, is this a good idea? Am I actually helping my hardware decompress faster? And is this a common technique in limited-resource programming (LRP)?