
In C++, when it is possible to implement the same functionality using either run-time (subclasses, virtual functions) or compile-time (templates, function overloading) polymorphism, why would you choose one over the other?

I would think that the compiled code would be larger for compile time polymorphism (more method/class definitions created for template types), and that compile time would give you more flexibility, while run time would give you "safer" polymorphism (i.e. harder to be used incorrectly by accident).

Are my assumptions correct? Are there any other advantages/disadvantages to either? Can anyone give a specific example where both would be viable options but one or the other would be a clearly better choice?

Also, does compile time polymorphism produce faster code, since it is not necessary to call functions through vtable, or does this get optimized away by the compiler anyway?

Example:

class Base
{
public:
  virtual ~Base() = default;
  virtual void print() = 0;
};

class Derived1 : public Base
{
public:
  void print() override
  {
    //do something different
  }
};

class Derived2 : public Base
{
public:
  void print() override
  {
    //do something different
  }
};

//Run time: dispatch through the vtable; pass by reference to avoid
//slicing (Base is abstract, so it can't be passed by value anyway)
void print(Base& o)
{
  o.print();
}

//Compile time: a separate print<T> is instantiated for each type used
template<typename T>
void print(T& o)
{
  o.print();
}
mclaassen
  • The major disadvantage of compile time polymorphism is that sometimes you just cannot do that, like when you manage objects through an interface. As long as you can use the compile-time one, it's probably better most of the time. And you can make your routine accept both of them with just a little effort. – BlueWanderer Jun 01 '13 at 19:23
  • The "run time" example won't compile - one can pass a base-class pointer and achieve dynamic dispatch, but since Base is abstract you cannot make an object. – mabraham Oct 26 '15 at 21:26
  • Possible duplicate of [C++ templates for performance?](https://stackoverflow.com/questions/8925177/c-templates-for-performance) – Trevor Boyd Smith Jun 15 '18 at 13:01

3 Answers


Static polymorphism produces faster code, mostly because of the possibility of aggressive inlining. Virtual functions can rarely be inlined, and then mostly in "non-polymorphic" scenarios. See this item in the C++ FAQ. If speed is your goal, you basically have no choice.
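
As a minimal sketch (type and function names are my own, not from the question): both functions below make the same call, but in the template the concrete type is visible at the call site, so the compiler can typically inline get() down to a plain member load, while the virtual version generally stays an indirect call:

struct Counter
{
  virtual ~Counter() = default;
  virtual int get() const = 0;
};

struct Fast : Counter
{
  int n = 0;
  int get() const override { return n; }
};

// Dynamic dispatch: resolved at run time through the vtable,
// so the compiler usually cannot inline the call.
int read_dynamic(const Counter& c) { return c.get(); }

// Static dispatch: T is known at each instantiation, so
// Fast::get() can be inlined and the call overhead disappears.
template <typename T>
int read_static(const T& c) { return c.get(); }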

On the other hand, not only compile times but also the readability and debuggability of the code are much worse when using static polymorphism. For instance: abstract methods are a clean way of enforcing implementation of certain interface methods. To achieve the same goal with static polymorphism, you need to resort to concept checking or the curiously recurring template pattern (CRTP).
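
For comparison, a minimal CRTP sketch (names hypothetical): the base template statically requires the derived class to provide do_print(), and a missing implementation fails at compile time rather than at run time:

template <typename Derived>
class Printable
{
public:
  void print() { static_cast<Derived&>(*this).do_print(); }
};

class Widget : public Printable<Widget>
{
public:
  void do_print() { /* ... */ }  // omitting this breaks compilation once print() is used
};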

The only situation when you really have to use dynamic polymorphism is when the implementation is not available at compile time; for instance, when it's loaded from a dynamic library. In practice though, you may want to exchange performance for cleaner code and faster compilation.
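
A sketch of that situation on POSIX (the Plugin interface and factory symbol name are hypothetical): the concrete type is compiled into a shared library the caller has never seen, so every call through the interface is necessarily dynamic:

#include <dlfcn.h>

struct Plugin
{
  virtual ~Plugin() = default;
  virtual void run() = 0;
};

Plugin* load_plugin(const char* path)
{
  void* lib = dlopen(path, RTLD_NOW);
  if (!lib) return nullptr;
  // The library exports an extern "C" factory returning the interface type.
  using Factory = Plugin* (*)();
  auto create = reinterpret_cast<Factory>(dlsym(lib, "create_plugin"));
  return create ? create() : nullptr;  // all calls on the result go through the vtable
}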

maciek gajewski
  • Desperately searching for this answer: when using CRTP, how do you loop over different classes which DO inherit from the same base, and call each class's function which overrides the base's one? The compiler does know the list of classes at compile time, so why dispatch at runtime :/ – Florian Apr 21 '19 at 22:34

After you filter out the obviously bad and suboptimal cases, I believe you're left with almost nothing. IMO it is pretty rare to actually face that kind of choice. You could improve the question by stating an example, for which a real comparison can be provided.

Assuming we have that realistic choice, I'd go for the compile-time solution: why pay at run time for something that is not absolutely necessary? Also, if something is decided at compile time, it is easier to reason about, follow in your head, and evaluate.

Virtual functions, just like function pointers, make it impossible to build accurate call graphs: you can review a call chain from the bottom, but not easily from the top. Virtual functions are supposed to follow certain rules, but if one doesn't, you have to inspect all of them to find the offender.

There are also some performance losses, probably not a big deal in the majority of cases, but with nothing on the other side of the balance, why accept them?

Balog Pal

In C++ when it is possible to implement the same functionality using either run time (sub classes, virtual functions) or compile time (templates, function overloading) polymorphism, why would you choose one over the other?

I would think that the compiled code would be larger for compile time polymorphism (more method/class definitions created for template types)...

Often yes - due to multiple instantiations for different combinations of template parameters, but consider:

  • with templates, only the functions actually called are instantiated
  • dead code elimination
  • constant array dimensions, allowing member variables such as T mydata[12]; to be allocated inside the object, automatic storage for local variables, etc., whereas a runtime-polymorphic implementation might need to use dynamic allocation (i.e. new[]); this can dramatically impact cache efficiency in some cases (see the sketch after this list)
  • inlining of function calls, which makes trivial things like small-object get/set operations about an order of magnitude faster on the implementations I've benchmarked
  • avoiding virtual dispatch, which amounts to following a pointer to a table of function pointers, then making an out-of-line call to one of them (it's normally the out-of-line aspect that hurts performance most)
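
To illustrate the array-dimension point above (both types hypothetical): the template stores its elements inline in the object, while the interface-friendly version only learns the size at run time and has to allocate:

#include <cstddef>
#include <memory>

// Compile-time size: the elements live inside the object itself,
// contiguous with its other members; no heap allocation.
template <typename T, std::size_t N>
struct StaticBuffer
{
  T mydata[N];
};

// Runtime-polymorphic style: the size arrives as a constructor
// argument, so the storage must be dynamically allocated.
struct DynamicBuffer
{
  explicit DynamicBuffer(std::size_t n) : mydata(new double[n]) {}
  std::unique_ptr<double[]> mydata;
};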

...and that compile time would give you more flexibility...

Templates certainly do:

  • given the same template instantiated for different types, the same code can mean different things: for example, T::f(1) might call a void f(int) noexcept function in one instantiation, a virtual void f(double) in another, and a T::f functor object's operator()(float) in yet another (see the sketch after this list); looked at from the other side, different parameter types can provide what the templated code needs in whatever way suits them best

  • SFINAE lets your code adapt at compile time to use the most efficient interfaces an object supports, without the object actively having to make a recommendation

  • due to the instantiate-only-functions-called aspect mentioned above, you can "get away" with instantiating a class template with a type for which only some of the class template's functions would compile. In some ways that's bad: programmers may expect that their seemingly working Template<MyType> will support all the operations that the Template<> supports for other types, only to have it fail when they try a specific operation. In other ways it's good: you can still use Template<> if you're not interested in all the operations

    • if Concepts [Lite] make it into a future C++ Standard, programmers will have the option of putting stronger up-front constraints on the semantic operations that types used as template parameters must support, which will avoid nasty surprises when a user finds their Template<MyType>::operationX broken, and generally give simpler error messages earlier in the compile
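
As a sketch of the first point above (all three types hypothetical): the same templated call compiles against completely unrelated parameter types, resolving differently in each instantiation:

struct Logger   { void f(int) noexcept { /* log the value */ } };
struct Plotter  { virtual void f(double) { /* plot a sample */ } };
struct Callback { struct F { void operator()(float) { /* invoke */ } } f; };

template <typename T>
void drive(T& t)
{
  t.f(1);  // f(int) for Logger, virtual f(double) for Plotter,
           // functor operator()(float) for Callback
}

Each of drive(logger), drive(plotter), and drive(callback) compiles, with t.f(1) meaning something different in each instantiation.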

...while run time would give you "safer" polymorphism (i.e. harder to be used incorrectly by accident).

Arguably so, as runtime interfaces are more rigid compared to the template flexibility above. The main "safety" problems with runtime polymorphism are:

  • some problems end up encouraging "fat" interfaces (in the sense Stroustrup mentions in The C++ Programming Language): APIs with functions that only work for some of the derived types, where algorithmic code needs to keep "asking" the derived types "should I do this for you", "can you do this", "did that work", etc.

  • you need virtual destructors: some classes don't have them (e.g. std::vector), making it harder to derive from them safely (sketched below)

  • the in-object pointers to virtual dispatch tables aren't valid across processes, making it hard to put runtime polymorphic objects in shared memory for access by multiple processes
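
A minimal sketch of the destructor hazard (deriving from std::vector here is exactly the anti-pattern the point warns against):

#include <vector>

struct MyVec : std::vector<int>  // std::vector has no virtual destructor
{
  ~MyVec() { /* release some extra resource */ }
};

void hazard()
{
  std::vector<int>* p = new MyVec;
  delete p;  // undefined behaviour: ~MyVec() is never run
}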

Can anyone give a specific example where both would be viable options but one or the other would be a clearly better choice?

Sure. Say you're writing a quick-sort function: you could only support data types that derive from some Sortable base class with a virtual comparison function and a virtual swap function, or you could write a sort template that uses a Less policy parameter defaulting to std::less<T>, and std::swap<>. Given the performance of a sort is overwhelmingly dominated by the performance of these comparison and swap operations, a template is massively better suited to this. That's why C++ std::sort clearly outperforms the C library's generic qsort function, which uses function pointers for what's effectively a C implementation of virtual dispatch. See here for more about that.
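
As a sketch of the two designs (the Sortable interface is hypothetical; the template mirrors the std::sort approach, shown here as a simple insertion sort):

#include <functional>
#include <utility>

// Runtime-polymorphic design: every comparison and swap is a virtual
// call on elements known only through the interface.
struct Sortable
{
  virtual ~Sortable() = default;
  virtual bool less_than(const Sortable& other) const = 0;
  virtual void swap_with(Sortable& other) = 0;
};

// Compile-time design: Less defaults to std::less<T>; the comparison
// and std::swap in the inner loop are ordinary calls the compiler can
// inline into the sort itself.
template <typename T, typename Less = std::less<T>>
void insertion_sort(T* data, int n, Less less = Less())
{
  for (int i = 1; i < n; ++i)
    for (int j = i; j > 0 && less(data[j], data[j - 1]); --j)
      std::swap(data[j], data[j - 1]);
}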

Also, does compile time polymorphism produce faster code, since it is not necessary to call functions through vtable, or does this get optimized away by the compiler anyway?

It's very often faster, but very occasionally the cumulative impact of template code bloat may overwhelm the myriad ways compile-time polymorphism is normally faster, such that on balance it's worse.

Tony Delroy