So I'm creating a library that has a class someBase, which downstream users will derive from in a number of their own classes:
class someBase {
public:
    virtual void foo() = 0;
    virtual ~someBase() {}  // virtual destructor, so derived objects can be deleted via a someBase*
};
I also have a vector of pointers to someBase, and I'm doing this :-
std::vector<someBase*> children;
// downstream user code populates children with some objects over here
for (std::size_t i = 0; i < children.size(); ++i)
    children[i]->foo();
Now profiling suggests that branch mispredictions on the virtual calls are one of the (several) bottlenecks in my code. What I'm looking to do is somehow access the RTTI of the objects and use it to sort the vector of children by class type, to improve both instruction cache locality and branch prediction.
Any suggestions/solutions on how this can be done?
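To make the idea concrete, this is roughly the shape of what I'm after. I'm using C++11's std::type_index here purely as a stand-in for whatever ordering turns out to be viable; nothing about it is settled:

#include <algorithm>
#include <typeindex>
#include <vector>

// Reorder 'children' so objects of the same dynamic type sit next to
// each other. std::type_index wraps type_info and supplies operator<,
// which makes it usable as a sort key.
std::sort(children.begin(), children.end(),
          [](someBase* a, someBase* b) {
              return std::type_index(typeid(*a)) < std::type_index(typeid(*b));
          });

// The hot loop then sees long runs of the same vtable target:
for (std::size_t i = 0; i < children.size(); ++i)
    children[i]->foo();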
The main challenges to keep in mind are :-
1.) I don't really know which or how many classes are going to be derived from someBase. Hypothetically, I could have a global enum in some common file that downstream users edit to add their own class type, and then sort on that, basically implementing my own RTTI (a rough sketch of this is below, after the list). But that's an ugly solution.
2.) PiotrNycz suggests in his answer below to use type_info. However, type_info only defines operator== and operator!= for comparisons. Any ideas on how to derive a strict weak ordering on type_info? (My attempt so far, using type_info::before(), is also sketched after the list.)
3.) I'm really looking to improve branch prediction and instruction cache locality, so if there is an alternative solution that achieves that, it would also be welcome.
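For (1), this is the kind of hand-rolled RTTI I have in mind; ClassKind and all the entries are hypothetical names for illustration only:

// Every downstream class would have to be registered in this enum.
enum ClassKind { kDerivedA, kDerivedB /* ...extended by downstream users */ };

class someBase {
public:
    virtual void foo() = 0;
    virtual ClassKind kind() const = 0;  // each derived class reports its own kind
    virtual ~someBase() {}
};

// Sorting then reduces to comparing plain enum values:
// std::sort(children.begin(), children.end(),
//           [](someBase* a, someBase* b) { return a->kind() < b->kind(); });

It works, but it forces every user to touch a shared header, which is exactly what I want to avoid.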
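For (2), the closest thing I've found is type_info::before(), which imposes an implementation-defined ordering on types. I'm assuming, but am not certain, that this qualifies as a strict weak ordering:

#include <typeinfo>

// Comparator over the pointed-to dynamic types using type_info::before().
// before() yields an implementation-defined collation order for types.
struct TypeBefore {
    bool operator()(const someBase* a, const someBase* b) const {
        return typeid(*a).before(typeid(*b));
    }
};

// Usage: std::sort(children.begin(), children.end(), TypeBefore());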