Imagine a project in which there is an interface class like the following:
struct Interface
{
    virtual void f() = 0;
    virtual void g() = 0;
    virtual void h() = 0;
};
Suppose that somewhere else, someone wishes to create a class implementing this interface, for which `f`, `g`, and `h` all do the same thing:
struct S : Interface
{
    virtual void f() {}
    virtual void g() { f(); }
    virtual void h() { f(); }
};
Then it would be a valid optimisation to generate a vtable for `S` whose entries are all pointers to `S::f`, thus saving a call to the wrapping functions `g` and `h`.
Printing the contents of the vtable, however, shows that this optimisation is not performed:
S s;
void **vtable = *(void***)(&s); /* I'm sorry. */
for (int i = 0; i < 3; i++)
    std::cout << vtable[i] << '\n';
0x400940
0x400950
0x400970
Compiling with `-O3` or `-Os` has no effect, nor does switching between Clang and GCC.
Why is this optimisation opportunity missed?
At the moment, these are the guesses that I have considered (and rejected):
- The vtable printing code actually prints garbage.
- The performance improvement is considered worthless.
- The ABI prohibits it.