7

In implementing a menu on an embedded system in C(++) (AVR-GCC), I ended up with a void function pointer that takes a char * argument, and the functions it points to usually make use of it.

// pointer to a function taking a char * and returning void
void (*auxFunc)(char *);

In some cases (in fact quite a few), the function actually doesn't need the argument, so I would do something like:

if (something)    doAuxFunc(NULL);

I know I could just overload with a different function signature, but I'm actually trying not to do this, as I am instantiating multiple objects and want to keep them light.

Is calling multiple functions with NULL pointers (where an actual pointer is expected) worse than declaring many more function prototypes?
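
For reference, a minimal sketch of the two approaches being weighed here; the handler names are illustrative, only doAuxFunc comes from the question.

// Approach in the question: one signature, callers pass NULL when
// there is nothing to pass.
void printHandler(char *msg)      // actually uses its argument
{
    if (msg) {
        // ... write msg to the display
    }
}

void blinkHandler(char *unused)   // never needs the argument
{
    (void)unused;
    // ... toggle a pin
}

// The alternative the question wants to avoid: a second,
// argument-free overload alongside the original one.
void doAuxFunc(char *arg);
void doAuxFunc(void);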

falro
    What does the profiler say? – zoul Aug 13 '12 at 14:47
    Why do you think overloads are "heavy"? – Kerrek SB Aug 13 '12 at 14:48
  • @KerrekSB Mostly because there will be (relatively) many objects holding these, and space is valuable on embedded systems. In this case a menu is more of an auxiliary part of the main functionality of the project. – falro Aug 13 '12 at 15:19
    @falro: _"Mostly because there will be (relatively) many objects holding these"_ -- You do realize that overloading creates exactly two instances instead of one, even for a million objects? It's not like every object gets to own a new copy of executable code. – Damon Aug 13 '12 at 15:41
  • @Damon Ah yes, you're right. Didn't think that one through. – falro Aug 13 '12 at 16:13
  • Overloading will only add to the code and data memory usage. But the memory consumption is per-function and does not increase per-object. The pointers to those overloaded functions would obviously scale per-object that holds a pointer. But the memory consumption _per-object_ for overloaded or non-overloaded functions is the same as the compiler determines which to use at compile time and it becomes a simple function call in the object code. – syplex Aug 13 '12 at 18:09

4 Answers

9

Checking for NULL is a very small overhead even on a microcontroller: a comparison against 0 is about as fast as an operation gets. If you overload several functions, you'll sacrifice readability for a (very slight) improvement in performance. Just let GCC's optimizer do its stuff, it's pretty good at it :)
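
To make that concrete, the handler itself can simply guard its argument. A sketch (the function name is made up):

#include <stddef.h>

// Handler that tolerates being called with no payload. The NULL test
// compiles down to a compare against zero and a branch.
void showStatus(char *msg)
{
    if (msg != NULL) {
        // ... use msg
    }
    // ... otherwise fall back to a default action
}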

3

Look at the disassembly: the compiler has to generate a null (zero) to pass as the argument, which burns either a register or a stack location. If it burns a register, it may cost you a push and a pop when the calling function is starving for registers. (Just making a function call at all may cost you pushes and pops if the caller is starving for registers, in order to implement the calling convention.)

So there is likely a cost, but it may not be enough of a cost to change the way you do things.
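
One way to check is to compile a small call site with avr-gcc -Os -S (or disassemble the object file with avr-objdump -d) and read the output. A sketch of what to look at; the commented instructions assume the usual avr-gcc convention of passing the first pointer argument in r25:r24, so verify against your own listing:

void doAuxFunc(char *arg);

void callWithNull(void)
{
    // The caller has to materialise the null pointer in the argument
    // registers before the call, typically something like:
    //   ldi r24, 0
    //   ldi r25, 0
    //   call doAuxFunc
    doAuxFunc(0);
}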

old_timer
2

Checking for 0 is really cheap, and overloading is even cheaper, since which function to call is decided at compile time.

But if you think that your interfaces get too complicated with overloading and your function is small, you should declare it inline and put it in a header. The check for 0 can then easily be optimized away by any decent modern compiler.
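
A sketch of that suggestion, assuming the helper is small enough to live in a header (the names are illustrative):

// menu_aux.h
#include <stddef.h>

inline void doAuxFunc(char *arg)
{
    if (arg != NULL) {
        // ... work that needs the argument
    }
    // ... work that is always done
}

// At a call site like doAuxFunc(NULL) the compiler sees the constant
// null pointer, so it can drop the dead branch and inline the rest.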

Jens Gustedt
  • Since this is a solution for an embedded system, inlining would result in more code memory being used, which is also expensive in this case. – syplex Aug 13 '12 at 18:05
  • @syplex: That very much depends on the size of function call and cleanup. Inlining can _save_ space. This is especially the case when a reasonable optimizer can then eliminate unneeded instructions. A common simple example is a `Foo::getBar` method that provides read-only access to the Bar member in Foo. This may end up as a simple pointer offset calculation, and could even be folded with other offset calculations. – MSalters Aug 14 '12 at 07:03
  • gcc can now do partial inlining: basically it creates a function core that depends on all the parameters and then inlines the part that can be constant-propagated. This has little size overhead. – Jens Gustedt Aug 14 '12 at 07:15
1

I think the cost of either approach is negligibly small, but this is the time to do benchmarks for yourself. If you do so, please post some results :)
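
If you do benchmark, a crude way on an ATmega-class part is to read a free-running timer around the call. A sketch under that assumption (register names come from <avr/io.h> and differ between devices, and doAuxFunc here is just a placeholder):

#include <avr/io.h>

// Placeholder for the real handler; noinline keeps the call from
// being optimized away so there is something to measure.
static void __attribute__((noinline)) doAuxFunc(char *arg)
{
    (void)arg;
}

int main(void)
{
    TCCR1B = (1 << CS10);          // free-running Timer1, no prescaler

    uint16_t before = TCNT1;
    doAuxFunc(0);                  // call under test (NULL argument)
    uint16_t after = TCNT1;

    volatile uint16_t cycles = after - before;  // keep the result alive
    (void)cycles;

    for (;;) { }
}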

SLOBY