9

I tend to use the standard *alloc/free functions to allocate and free dynamic memory in my C programs. I wonder if there are any good reasons to use the GLib memory allocation functions instead of the standard ones.

I'd be grateful if the community could point out situations where either of these solutions is a winner or a loser. I am also interested in performance issues I could hit in case I use one or the other.

Thanks!

Edited to state platforms

These programs normally run on all kinds of Linux/Unix distributions, usually on 64-bit architectures, compiled with GCC 4.2.

Manuel Salvadores

3 Answers

8

In my opinion, the most valuable difference between the GLib functions and the standard library ones is that the GLib functions abort the program if the allocation fails. No more checking to see if the return value from malloc() is NULL! Other than that, there's no difference in allocation strategy - g_malloc() calls malloc() internally, though as one of the other answers here states, it is possible to change that.

Another difference is that the GLib functions allow you to have (rudimentary) memory leak checking using g_mem_profile().

GLib also has a slice allocator, which is more efficient if you are allocating many equal-sized chunks of memory. This doesn't use the system malloc() and free(), but again, it is possible to change that for debugging purposes.

ptomato
  • On Linux, allocation never fails anyway (with default kernel overcommit settings). So even though I always check malloc's return value (I would feel dirty otherwise), that code path is never exercised. – Prof. Falken Oct 28 '10 at 12:44
  • I gave up checking malloc return values a long time ago. What are you going to do if you have run out of memory? The chances are, you won't even be able to write a message to stderr. It seems appropriate to abort the program which will happen anyway as soon as the null pointer is dereferenced. – JeremyP Oct 28 '10 at 15:01
  • @JeremyP, exactly! That's why I'd rather abort the program. I can imagine a use case for testing whether there's enough memory to allocate something, but for those cases there's always `g_try_malloc()`. – ptomato Oct 28 '10 at 15:21
  • If an implementation cannot `fprintf` during an out-of-memory condition, it is extremely broken. There's no reason this function should perform any dynamic allocation, and it should not even use non-trivial stack space except when printing floating point numbers. I've lost the link but there was a really good question/answers here on SO a few months back about best practices for handling OOM. "Just let the program crash" is great for programs that are "read-only" or working with a dataset on disk that's kept consistent, but horrible for programs holding valuable unwritten data. – R.. GitHub STOP HELPING ICE Oct 28 '10 at 16:23
  • Good point, I agree. If you find the link, please post. I think, however, that GLib/GTK are mostly used for writing desktop applications, where it's not so vital, since the amount of memory available on modern desktops/laptops far exceeds what the typical application needs; not many people work with gigabytes of data. If you're writing, say, an audio or video editing application, then yes, you do need to use `g_try_malloc()`. – ptomato Oct 29 '10 at 07:24
  • @R. good point. uClibc even has a tiny internal static buffer for printf/scanf specifically for out-of-memory conditions. – Prof. Falken Feb 21 '11 at 11:28
  • @ptomato: Just point your desktop file manager at a directory containing a 20000x15000 jpeg or tiff file... Robustness in "desktop applications" on *nix is a joke. This situation is even worse than Windows.... – R.. GitHub STOP HELPING ICE Feb 21 '11 at 17:30
4

If you for some reason want to control the underlying allocation strategy yourself, you can use g_mem_set_vtable() to use your own functions instead of malloc() / free().

This is possible with malloc()/free() too, through linker tricks, but GLib exposes an explicit API for it, as well as the ability to log your own allocations and frees with a mem-profiler table.

Prof. Falken
2

It depends on the underlying platform. Under SCO Unix, for example, malloc follows a "best-fit" strategy, which is memory-efficient but costs speed.

So if your program depends on particular allocation behavior across different systems/platforms, it is always good to be in control of the malloc strategy.

Peter Miehle
  • My programs always run on Linux/Unix platforms, and I guess that the standard glibc malloc implementations on all these systems must follow the same strategy. Am I right? (Question edited to state architecture.) Thanks for your answer. – Manuel Salvadores Oct 28 '10 at 10:31
  • @msalvadores, right. Almost ALL Linux distributions use glibc (not to be confused with GLib) malloc. Those that don't, you have probably never heard of. – Prof. Falken Oct 28 '10 at 11:58