I was going through some real-time OS specifications and read that in an RTOS we usually avoid using malloc. The reason given was performance: allocating memory through malloc is time-consuming, and the overhead of keeping track of the allocated memory is also high.

Since real-time systems place time constraints on all processes, we generally don't use malloc. I got curious and started researching how memory is actually allocated at run time in an RTOS, and I came across memory pools. It was written that a memory pool means fixed-size block allocation, and that the advantage of memory pools is that they don't suffer from fragmentation. How is that possible? Suppose we have 3 pools of 4 bytes each and the application requires 10 bytes; in that case the memory pools will suffer from internal fragmentation.

How do memory pools work, and how is the memory allocated? Do applications get their pools at compile time, e.g. will a particular application get 3 pools of 4-byte blocks? What if they require memory that cannot fit in the pools? Are many memory pools of different sizes present in such a system? Please explain this to me.

Sohrab Ahmad

2 Answers

Well, fragmentation depends on the memory pool implementation. Generally, a memory pool is a pool of memory blocks of a fixed size. When something wants a block of that size, it goes to that pool. Thus there's no fragmentation, because everything that wants a block of that size gets it from a pool of blocks of exactly that size.

Now, if a pool of blocks of a particular size does not exist, a pool of a larger block size can be used. If that occurs, there is technically fragmentation, because part of the allocated block goes unused (internal fragmentation).

If the pools supplied blocks of every required size, there would be no fragmentation at all.
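
To make that concrete, here is a minimal sketch in C of a single fixed-size pool (all the names here are hypothetical, not from any particular RTOS). The point to notice is that allocation and deallocation are each a constant-time free-list operation, with no searching or splitting:

    #include <stddef.h>
    #include <stdint.h>

    #define BLOCK_SIZE  32   /* every block in this pool is 32 bytes */
    #define NUM_BLOCKS  64   /* pool capacity is fixed at build time */

    /* While a block is free, its first bytes hold the link to the
     * next free block; the union keeps the link pointer aligned. */
    typedef union block {
        union block *next;             /* valid while the block is free */
        uint8_t      data[BLOCK_SIZE]; /* payload while it is allocated */
    } block_t;

    static block_t  pool_storage[NUM_BLOCKS];
    static block_t *free_list;

    void pool_init(void)
    {
        /* Thread every block onto the free list. */
        free_list = NULL;
        for (size_t i = 0; i < NUM_BLOCKS; i++) {
            pool_storage[i].next = free_list;
            free_list = &pool_storage[i];
        }
    }

    void *pool_alloc(void)
    {
        /* O(1): pop the head of the free list. */
        block_t *b = free_list;
        if (b != NULL)
            free_list = b->next;
        return b;               /* NULL when the pool is exhausted */
    }

    void pool_free(void *p)
    {
        /* O(1): push the block back onto the free list. */
        block_t *b = p;
        b->next = free_list;
        free_list = b;
    }

Because every block is the same size, a freed block is always immediately reusable by the next request: the pool can run out, but its free space never splinters into unusable pieces the way a general-purpose heap's can.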

Peter Ritchie

Pools don't eliminate fragmentation, but they can dramatically reduce it, and they can also reduce the overhead of allocating a very large number of very small blocks. One good scheme is a library that lets client code create a pool for each struct type it allocates in large numbers. On pool creation, you specify the block size, the number of blocks to allocate initially and to grow by, plus a text name for debugging.

To allocate a block, you pass the pool ID to the allocator. Whenever the pool has no free blocks, it allocates a contiguous chunk of blocks and makes them available, returning one of them. Whenever a block is freed, if all blocks in that block's chunk are free, it frees the chunk.

For debug, there's a routine that prints all pools, giving the description, the number allocated, and possibly other stats like the number of free ones available (if this is high, there are fragmentation issues) and the max ever allocated. Very helpful for finding memory leaks.
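
Here is a rough sketch in C of the kind of library described above. The names and struct layout are invented for illustration, and chunk reclamation (freeing a chunk once all of its blocks are free again) is left out to keep it short:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct pool {
        const char *name;          /* text name for debugging */
        size_t      block_size;    /* bytes per block (rounded up below) */
        size_t      grow_count;    /* blocks added per chunk when empty */
        void       *free_list;     /* free blocks linked through themselves */
        size_t      n_allocated;   /* blocks currently handed out */
        size_t      n_free;        /* blocks sitting on the free list */
        size_t      max_allocated; /* high-water mark for the debug dump */
    } pool_t;

    /* Allocate one contiguous chunk of `count` blocks and push each
     * block onto the pool's free list. */
    static int pool_grow(pool_t *p, size_t count)
    {
        char *chunk = malloc(p->block_size * count);
        if (chunk == NULL)
            return -1;
        for (size_t i = 0; i < count; i++) {
            void *block = chunk + i * p->block_size;
            *(void **)block = p->free_list;
            p->free_list = block;
        }
        p->n_free += count;
        return 0;
    }

    pool_t *pool_create(const char *name, size_t block_size,
                        size_t initial_count, size_t grow_count)
    {
        pool_t *p = calloc(1, sizeof *p);
        if (p == NULL)
            return NULL;
        /* Round block_size up so a free block can hold the list pointer
         * and every block stays pointer-aligned within its chunk. */
        if (block_size < sizeof(void *))
            block_size = sizeof(void *);
        block_size = (block_size + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
        p->name = name;
        p->block_size = block_size;
        p->grow_count = grow_count ? grow_count : 1;
        if (initial_count && pool_grow(p, initial_count) != 0) {
            free(p);
            return NULL;
        }
        return p;
    }

    void *pool_alloc(pool_t *p)
    {
        /* Grow by one chunk whenever the pool runs dry. */
        if (p->free_list == NULL && pool_grow(p, p->grow_count) != 0)
            return NULL;
        void *block = p->free_list;
        p->free_list = *(void **)block;
        p->n_free--;
        p->n_allocated++;
        if (p->n_allocated > p->max_allocated)
            p->max_allocated = p->n_allocated;
        return block;
    }

    void pool_free(pool_t *p, void *block)
    {
        *(void **)block = p->free_list;
        p->free_list = block;
        p->n_free++;
        p->n_allocated--;
    }

    /* The debug dump: a high free count relative to allocated blocks
     * hints at chunk-level fragmentation; a steadily climbing
     * high-water mark hints at a leak. */
    void pool_dump(const pool_t *p)
    {
        printf("%-16s allocated=%zu free=%zu max=%zu\n",
               p->name, p->n_allocated, p->n_free, p->max_allocated);
    }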

The worst case for this type of library is a subsystem that allocates a large number of blocks early in the life of the system and then frees a random majority of them: lots of chunks remain allocated, but with few blocks in use. The best case (compared with malloc) is continuous cycling through new blocks with widely varying lifetimes, in systems that have to stay up for long durations, such as certain embedded systems.

This is simplest and works best for single-threaded applications. For multi-threaded applications, care has to be taken to make it thread-safe, and you might need to mimic the optimizations that malloc() often does under the covers to minimize locking overhead (e.g., per-thread "arenas").
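
As a minimal illustration of the simplest thread-safe variant, here is a hypothetical wrapper around the pool_t sketch above that serializes every call with a single POSIX mutex; per-thread arenas are the step beyond this, giving each thread its own pool so no lock is needed at all, at the cost of more idle blocks:

    #include <pthread.h>

    /* One mutex per pool serializes every alloc and free. The lock
     * must be set up once, e.g. with PTHREAD_MUTEX_INITIALIZER or
     * pthread_mutex_init(), before first use. */
    typedef struct {
        pool_t         *pool;   /* the single-threaded pool from above */
        pthread_mutex_t lock;
    } mt_pool_t;

    void *mt_pool_alloc(mt_pool_t *mp)
    {
        pthread_mutex_lock(&mp->lock);
        void *block = pool_alloc(mp->pool);
        pthread_mutex_unlock(&mp->lock);
        return block;
    }

    void mt_pool_free(mt_pool_t *mp, void *block)
    {
        pthread_mutex_lock(&mp->lock);
        pool_free(mp->pool, block);
        pthread_mutex_unlock(&mp->lock);
    }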

Jeff Learman