
I've read that repeated calls to malloc/free can be expensive, and that for this reason C++ standard library containers use memory pools rather than calling free in their destructors. I've also read that, because of this, C++ standard library containers can outperform manually allocating and deallocating all the necessary C-style arrays.

However, I'm confused about this, since now I'm reading in the C FAQ: ( http://c-faq.com/malloc/freetoOS.html )

Most implementations of malloc/free do not return freed memory to the operating system, but merely make it available for future malloc calls within the same program.

This means that the malloc/free functions essentially try to do the very same job as the C++ standard library containers: they try to optimize repeated claiming/reclaiming of memory by keeping memory in a pool and handing the program pieces of that pool on request. While I can see the benefits of such an optimization performed once, my intuition tells me that if we start doing this on several layers of abstraction simultaneously, performance is likely to decrease, since we would be duplicating the same work.

What am I misunderstanding here?

  • What you're misunderstanding here is that unless your job involves writing the C++ library itself, this should be of no concern to anyone. I could never recall that in the 20+ years of hacking C++ this is something that I really cared about, ever. – Sam Varshavchik Oct 13 '17 at 18:32
  • @SamVarshavchik This still doesn't prevent me from asking questions about "why is it constructed in this and not that way" out of plain curiosity or to learn. –  Oct 13 '17 at 19:03

2 Answers


Some implementations of the standard library use memory pools.

In general, when you know the memory needs of a particular container, you might be able to do a better job of managing its memory than a general-purpose memory manager that doesn't know your container's specific needs.

For example, if you're using std::list<int>, every node in the list is the same size, and having the container maintain a list of unused nodes (just two pointer assignments to add or remove a node to/from the free list) may be faster than releasing unused nodes back to the more general, more complex memory manager used by new/delete (malloc/free).

Pete Becker

The general memory management utility called malloc is generally optimized for common case scenarios. Since the system should support multiple processes, each behaving differently, this optimization might be excellent for some applications and not that good for others. A general purpose allocator tries to consider the following generic guidelines:

  • Maximizing Compatibility: An allocator should be plug-compatible with others; in particular it should obey ANSI/POSIX conventions.
  • Maximizing Portability: Reliance on as few system-dependent features (such as system calls) as possible, while still providing optional support for other useful features found only on some systems; conformance to all known system constraints on alignment and addressing rules.
  • Minimizing Space: The allocator should not waste space: It should obtain as little memory from the system as possible, and should maintain memory in ways that minimize fragmentation -- "holes" in contiguous chunks of memory that are not used by the program.
  • Minimizing Time: The malloc(), free() and realloc() routines should be as fast as possible in the average case.
  • Maximizing Tunability: Optional features and behavior should be controllable by users either statically (via #define and the like) or dynamically (via control commands such as mallopt).
  • Maximizing Locality: Allocating chunks of memory that are typically used together near each other. This helps minimize page and cache misses during program execution.
  • Maximizing Error Detection: It does not seem possible for a general-purpose allocator to also serve as general-purpose memory error testing tool such as Purify. However, allocators should provide some means for detecting corruption due to overwriting memory, multiple frees, and so on.
  • Minimizing Anomalies

This snippet was taken from a great document written by Doug Lea about what is known as Doug Lea's malloc (dlmalloc), which served as the de facto standard memory allocator for many years, and which I think every programmer should read.

By contrast, when a container is created, many factors are known at compile time, and even more can be predicted at run time; for example, we know the size of the objects it will hold. Using this knowledge, standard containers were written to work well alongside general-purpose allocators.

Daniel Trugman