Naturally the binary size is going to be a bit compiler/linker-dependent, but I've yet to find a case where using a class template and generating the appropriate template instantiations actually inflated binary size any more than the handwritten equivalent unless your handwritten tuples are exported across a dylib.
Linkers do a pretty fantastic job here at eliminating redundant code between multiple translation units. This is not something I'm merely taking for granted. At my previous workplace, we had to deal with a very obsessive mindset about binary distribution size, and had to effectively show that these kinds of class templates that had direct handwritten equivalents did not actually increase the distribution size any more than the handwritten equivalents.
There are cases where code generation of any sort can bloat binaries, but that's typically when code generation is used as a static alternative to dynamic forms of branching (static vs. dynamic polymorphism, e.g.). For example, compare std::sort to C's qsort. If you sorted a boatload of trivially-constructible/destructible types stored contiguously with std::sort and then qsort, chances are that qsort would yield a smaller binary, since it involves no code generation and the only unique code required per type is a comparator. std::sort would generate a whole new sorting function for each type handled, with the comparator potentially inlined.

That said, std::sort typically runs 2-3 times faster than qsort in exchange for the larger binary, because it exchanges dynamic dispatch for static dispatch. That's typically where you see code generation making a difference: when the choice is between speed (with code generation) and smaller binary size (without).
There are some aesthetics that might lead you to favor the handwritten version anyway like so:
struct tuplef {
    float x, y;
    // ... the rest of the methods and operator implementations
};
... but performance and binary size should not be among them. This kind of approach can be useful if you want these various tuples to diverge more in their design or implementation. For example, you might have a tupled which wants to align its members and use SIMD with an AoS rep, like so*:
* Not a great example of SIMD which only benefits from 128-bit XMM registers, but hopefully enough to make a point.
struct tupled {
    ALIGN16 double xy[2];
    // ... the rest of the methods and operator implementations in SIMD
};
... this kind of variation can be quite awkward and unwieldy to implement if you just have one generic tuple.
template <class T>
struct tuple {
    T x, y;
    // ... the rest of the methods and operator implementations
};
It is worth noting with a class template like this that you don't necessarily need to make everything a member function of the class. You can gain a lot more flexibility and simplicity by preferring non-members like so:
typedef tuple<float> tuplef;
typedef tuple<double> tupled;
/// 'some_operation' is only available for floating-point tuples.
double some_operation(const tupled& xy) {...}
float some_operation(const tuplef& xy) {...}
... where you can now use plain old function overloading in cases where implementations of some_operation need to diverge from each other based on the type of tuple. You can also omit overloads of some_operation for types where it doesn't make sense, and get the kind of filtering and routing behavior you were talking about. Favoring non-members also helps prevent your tuple from turning into a monolith, and decouples it from operations which don't apply equally to all tuples.
You can, of course, achieve this with some fancier techniques while still keeping everything a member of the class. Yet favoring non-members for implementations which diverge between various types of tuples, or which only apply to certain types of tuples, helps keep the code a lot plainer. A reasonable split is to favor members for common-denominator operations that apply to all tuples and are implemented in pretty much the same way, and non-members for operations which diverge between tuple types.