
I'm currently trying to implement a lossless data compression algorithm for a project I'm working on. The goal is to compress a fixed-size list of floating-point values. The code has to be written in C and can NOT use dynamic memory allocation. This constraint hurts, since most (if not all) descriptions of lossless algorithms assume some dynamic allocation.

Two of the main algorithms I've been looking at are Huffman coding and arithmetic coding. Would this task be possible without dynamic memory allocation? Are there any approaches or ideas you could suggest? If you think it's not possible, please let me know why :-)

Any help or suggestions would be appreciated!

xDranik

1 Answer


I don't see any reason either should need dynamic memory allocation. The size of the working data set is bounded, so just use an array of that size (preferably with automatic storage duration so that you don't make the code gratuitously non-reentrant, but static storage duration would also work) and do all your work in there.
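As a sketch of this point for Huffman coding: with an alphabet of N symbols, a Huffman tree never has more than 2*N - 1 nodes, so a fixed-size node pool (passed in by the caller, so the function stays reentrant) is enough. The byte alphabet, the index-based node layout, and the O(n²) minimum-pair selection below are my illustration, not part of the answer:

```c
/* Hypothetical sketch: build a Huffman tree in a caller-supplied
   fixed-size pool, with no dynamic allocation anywhere. */
#define NSYMBOLS 256                   /* assumed alphabet size */
#define MAX_NODES (2 * NSYMBOLS - 1)   /* n leaves + (n-1) internal nodes */

struct node {
    unsigned long weight;
    int left, right;   /* pool indices, -1 for leaves */
    int symbol;        /* valid for leaves only, else -1 */
};

/* Returns the pool index of the root, or -1 if all frequencies are zero. */
int build_huffman(const unsigned long freq[NSYMBOLS],
                  struct node pool[MAX_NODES])
{
    int active[MAX_NODES];             /* indices of not-yet-merged roots */
    int n_active = 0, n_nodes = 0;

    for (int s = 0; s < NSYMBOLS; s++) {
        if (freq[s] == 0) continue;
        pool[n_nodes] = (struct node){ freq[s], -1, -1, s };
        active[n_active++] = n_nodes++;
    }
    if (n_nodes == 0) return -1;

    while (n_active > 1) {
        /* pick the two lowest-weight active roots; quadratic overall,
           which is fine for a small fixed alphabet */
        int a = 0, b = 1;
        if (pool[active[b]].weight < pool[active[a]].weight) {
            int t = a; a = b; b = t;
        }
        for (int i = 2; i < n_active; i++) {
            if (pool[active[i]].weight < pool[active[a]].weight) {
                b = a; a = i;
            } else if (pool[active[i]].weight < pool[active[b]].weight) {
                b = i;
            }
        }
        int ia = active[a], ib = active[b];
        pool[n_nodes] = (struct node){ pool[ia].weight + pool[ib].weight,
                                       ia, ib, -1 };
        /* replace one merged entry with the new node, drop the other */
        int lo = a < b ? a : b, hi = a > b ? a : b;
        active[lo] = n_nodes++;
        active[hi] = active[--n_active];
    }
    return active[0];
}
```

Declaring `struct node pool[MAX_NODES];` with automatic storage in the caller keeps the builder reentrant, exactly as the answer suggests; a static pool would also work if reentrancy doesn't matter.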

R.. GitHub STOP HELPING ICE
  • Also, one can use VLAs if the data size is not bounded at compile time (but most likely it isn't even worth the effort in this case, and a maximum-size buffer does the job). –  Oct 09 '13 at 04:35
  • The size of a VLA can't change once it comes into scope, so you can't use it for storage that has unbounded growth. It would only be useful if you know the bound on the storage in advance. – R.. GitHub STOP HELPING ICE Oct 09 '13 at 17:43
  • But why would its size need to change? –  Oct 09 '13 at 18:33
  • For example if you were doing compression where the size of the dictionary could grow without bound... That doesn't apply for any sane real-world compression, but I'm sure fools have invented such madness before. – R.. GitHub STOP HELPING ICE Oct 09 '13 at 19:00
  • In my read, that case does not apply since "The goal is to compress a fixed size list of floating point values." –  Oct 09 '13 at 19:01
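To illustrate the point made in these comments: a VLA's size is fixed once it comes into scope, but that is enough whenever a worst-case output bound is known up front. The toy run-length encoder below is my illustration, not something from the thread; each run becomes a (count, byte) pair, so the output never exceeds 2*n bytes:

```c
#include <stddef.h>

/* Hypothetical sketch: encode into a caller-provided worst-case buffer.
   Returns the number of output bytes written (at most 2 * n). */
size_t rle_encode(size_t n, const unsigned char in[n], unsigned char out[2 * n])
{
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        /* extend the run while the byte repeats, capping at 255 per pair */
        while (i + run < n && in[i + run] == in[i] && run < 255)
            run++;
        out[o++] = (unsigned char)run;
        out[o++] = in[i];
        i += run;
    }
    return o;
}
```

A caller that only learns `n` at run time can declare `unsigned char out[2 * n];` once on entry and pass it in; the buffer's size never needs to change afterwards, which is exactly the case where a VLA is useful.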