As far as I know, custom memory managers are used in several medium and large-scale projects. This recent answer on security.se explains how a custom memory allocator was added to OpenSSL for performance reasons and ultimately made the Heartbleed exploit worse. This old thread here discusses memory allocators, and in particular one of the answers links to an academic paper showing that while people write custom memory allocators because malloc is slow, a general-purpose state-of-the-art allocator easily beats them and causes fewer problems than developers reinventing the wheel in every project.

As someone who does not program professionally, I am curious how we ended up in this state and why we seem to be stuck there --- assuming my view is correct, which it may well not be. I imagine there are subtler issues at play, such as thread safety. Apologies if I am misrepresenting the situation.

Why is the system malloc not developed and optimized to match the performance of these "general-purpose state-of-the-art allocators"? It seems to me that it should be an important feature for OS and standard-library writers to focus on. I have heard a lot of talk about the scheduler implementation in the Linux kernel in the past, for instance, and naively I would expect roughly the same amount of interest in memory allocators. How come the standard malloc is so bad that so many people feel the need to roll their own allocator? If there are alternative implementations that work so much better, why haven't system programmers included them in Linux and Windows, either as the default or as a link-time option?

Federico Poloni
  • Your first link appears completely unrelated. – Alexey Frunze Oct 15 '16 at 12:00
  • Short answers... On custom allocators and different languages: one size doesn't fit all. On every aspect: people make mistakes (including thinking that their custom allocator is better, or correct). And the proliferation of computers and internet access, with both getting faster by the day, makes mistakes more costly: they affect more and more people and data. – Alexey Frunze Oct 15 '16 at 12:23
  • @AlexeyFrunze Oops, I copy-and-pasted the wrong URL. Fixed now. – Federico Poloni Oct 15 '16 at 13:08

1 Answer

There are two problems:

  1. No single allocation scheme fits all application needs (see the sketch after this list).

  2. The C library was poorly designed (or not designed at all). Some non-Unix operating systems have configurable memory managers that let the application choose the allocation scheme. In the Unix world, the solution is to link your own malloc/free implementation into your application.
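
To illustrate point 1, here is a minimal sketch (the names are mine, not from any real library) of an arena allocator: a scheme that beats a general-purpose malloc for code that allocates many short-lived objects and releases them all at once, and is useless for any other pattern.

    #include <stdlib.h>
    #include <stddef.h>

    /* Arena ("bump") allocator sketch. Allocation is a pointer
       increment; there is no per-object free, only a wholesale reset.
       Great for parsers, per-request server state, etc.; wrong for
       everything else. */

    #define ARENA_SIZE (1u << 20)            /* 1 MiB backing buffer */

    typedef struct {
        unsigned char *base;
        size_t         used;
    } arena;

    static void *arena_alloc(arena *a, size_t n)
    {
        n = (n + 15) & ~(size_t)15;          /* keep 16-byte alignment */
        if (n > ARENA_SIZE - a->used)
            return NULL;                     /* out of arena space */
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    static void arena_reset(arena *a)        /* frees everything in O(1) */
    {
        a->used = 0;
    }

    int main(void)
    {
        arena a = { malloc(ARENA_SIZE), 0 };
        if (!a.base) return 1;
        for (int i = 0; i < 1000; i++)
            arena_alloc(&a, 64);             /* each call is a few instructions */
        arena_reset(&a);                     /* one "free" for all 1000 objects */
        free(a.base);
        return 0;
    }

A general-purpose malloc cannot assume that all allocations die together, which is exactly why no single scheme fits every application.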

There is no single standard malloc implementation (GNU libc's is probably the closest to a de facto standard). The malloc implementations that come with the OS tend to work fine for most applications; when one does not, the allocator can be replaced at link or load time, as sketched below.
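
As a concrete example of that substitution, here is a hedged sketch of load-time interposition on Linux/glibc; the file name and counter are mine, but LD_PRELOAD is the same mechanism that drop-in allocators such as jemalloc and tcmalloc use to replace the system malloc without recompiling the application.

    /* count_malloc.c -- illustrative interposer, not production code.
       Build and run (assuming gcc and glibc):
         gcc -shared -fPIC count_malloc.c -o count_malloc.so -ldl
         LD_PRELOAD=./count_malloc.so ./your_program
       Because this library is searched before libc, its malloc wins. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stddef.h>

    static void *(*real_malloc)(size_t);
    static unsigned long calls;              /* not atomic: demo only */

    void *malloc(size_t size)
    {
        if (!real_malloc)                    /* look up libc's malloc once;
                                                glibc tolerates this lazy
                                                lookup for malloc, though
                                                interposing calloc this way
                                                needs more care */
            real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");
        calls++;                             /* a real interposer must also
                                                wrap free/calloc/realloc and
                                                be thread-safe */
        return real_malloc(size);
    }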

user3344003